Mar 13 00:36:58.964878 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 12 22:08:29 -00 2026
Mar 13 00:36:58.964933 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:36:58.964942 kernel: BIOS-provided physical RAM map:
Mar 13 00:36:58.964949 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Mar 13 00:36:58.964955 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Mar 13 00:36:58.964961 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 13 00:36:58.964971 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Mar 13 00:36:58.964977 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Mar 13 00:36:58.964983 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 13 00:36:58.964989 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 13 00:36:58.964995 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 13 00:36:58.965002 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 13 00:36:58.965008 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Mar 13 00:36:58.965014 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 13 00:36:58.965024 kernel: NX (Execute Disable) protection: active
Mar 13 00:36:58.965030 kernel: APIC: Static calls initialized
Mar 13 00:36:58.965037 kernel: SMBIOS 2.8 present.
Mar 13 00:36:58.965044 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Mar 13 00:36:58.965050 kernel: DMI: Memory slots populated: 1/1
Mar 13 00:36:58.965057 kernel: Hypervisor detected: KVM
Mar 13 00:36:58.965065 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Mar 13 00:36:58.965072 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 13 00:36:58.965078 kernel: kvm-clock: using sched offset of 7366530410 cycles
Mar 13 00:36:58.965085 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 13 00:36:58.965092 kernel: tsc: Detected 2000.000 MHz processor
Mar 13 00:36:58.965099 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 13 00:36:58.965106 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 13 00:36:58.965113 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Mar 13 00:36:58.965120 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 13 00:36:58.965127 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 13 00:36:58.965136 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Mar 13 00:36:58.965142 kernel: Using GB pages for direct mapping
Mar 13 00:36:58.965149 kernel: ACPI: Early table checksum verification disabled
Mar 13 00:36:58.965156 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Mar 13 00:36:58.965162 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:36:58.965169 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:36:58.965176 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:36:58.965183 kernel: ACPI: FACS 0x000000007FFE0000 000040
Mar 13 00:36:58.965190 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:36:58.965199 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:36:58.965209 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:36:58.965216 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:36:58.965223 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Mar 13 00:36:58.965230 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Mar 13 00:36:58.965240 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Mar 13 00:36:58.965247 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Mar 13 00:36:58.965254 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Mar 13 00:36:58.965261 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Mar 13 00:36:58.965268 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Mar 13 00:36:58.965275 kernel: No NUMA configuration found
Mar 13 00:36:58.965282 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Mar 13 00:36:58.965289 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff]
Mar 13 00:36:58.965296 kernel: Zone ranges:
Mar 13 00:36:58.965305 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 13 00:36:58.965312 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 13 00:36:58.965319 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Mar 13 00:36:58.965327 kernel: Device empty
Mar 13 00:36:58.965334 kernel: Movable zone start for each node
Mar 13 00:36:58.965341 kernel: Early memory node ranges
Mar 13 00:36:58.965348 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 13 00:36:58.965355 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Mar 13 00:36:58.965362 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Mar 13 00:36:58.965371 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Mar 13 00:36:58.965378 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 13 00:36:58.965385 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 13 00:36:58.965393 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Mar 13 00:36:58.965400 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 13 00:36:58.965407 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 13 00:36:58.965414 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 13 00:36:58.965421 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 13 00:36:58.965428 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 13 00:36:58.965437 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 13 00:36:58.965444 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 13 00:36:58.965452 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 13 00:36:58.965459 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 13 00:36:58.965466 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 13 00:36:58.965473 kernel: TSC deadline timer available
Mar 13 00:36:58.965480 kernel: CPU topo: Max. logical packages: 1
Mar 13 00:36:58.965487 kernel: CPU topo: Max. logical dies: 1
Mar 13 00:36:58.965493 kernel: CPU topo: Max. dies per package: 1
Mar 13 00:36:58.965500 kernel: CPU topo: Max. threads per core: 1
Mar 13 00:36:58.965509 kernel: CPU topo: Num. cores per package: 2
Mar 13 00:36:58.965516 kernel: CPU topo: Num. threads per package: 2
Mar 13 00:36:58.965523 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Mar 13 00:36:58.965530 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 13 00:36:58.965537 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 13 00:36:58.965544 kernel: kvm-guest: setup PV sched yield
Mar 13 00:36:58.965551 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 13 00:36:58.965558 kernel: Booting paravirtualized kernel on KVM
Mar 13 00:36:58.965565 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 13 00:36:58.965574 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 13 00:36:58.965581 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Mar 13 00:36:58.965588 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Mar 13 00:36:58.965595 kernel: pcpu-alloc: [0] 0 1
Mar 13 00:36:58.965601 kernel: kvm-guest: PV spinlocks enabled
Mar 13 00:36:58.965608 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 13 00:36:58.965616 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:36:58.965623 kernel: random: crng init done
Mar 13 00:36:58.965632 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 13 00:36:58.965639 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 13 00:36:58.965646 kernel: Fallback order for Node 0: 0
Mar 13 00:36:58.965653 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Mar 13 00:36:58.965660 kernel: Policy zone: Normal
Mar 13 00:36:58.965667 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 13 00:36:58.965674 kernel: software IO TLB: area num 2.
Mar 13 00:36:58.965681 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 13 00:36:58.965688 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 13 00:36:58.965697 kernel: ftrace: allocated 157 pages with 5 groups
Mar 13 00:36:58.965704 kernel: Dynamic Preempt: voluntary
Mar 13 00:36:58.965710 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 13 00:36:58.965718 kernel: rcu: RCU event tracing is enabled.
Mar 13 00:36:58.965725 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 13 00:36:58.965732 kernel: Trampoline variant of Tasks RCU enabled.
Mar 13 00:36:58.965740 kernel: Rude variant of Tasks RCU enabled.
Mar 13 00:36:58.965747 kernel: Tracing variant of Tasks RCU enabled.
Mar 13 00:36:58.965754 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 13 00:36:58.965763 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 13 00:36:58.965770 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 13 00:36:58.965784 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 13 00:36:58.965794 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 13 00:36:58.965801 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 13 00:36:58.965808 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 13 00:36:58.965815 kernel: Console: colour VGA+ 80x25
Mar 13 00:36:58.965823 kernel: printk: legacy console [tty0] enabled
Mar 13 00:36:58.965830 kernel: printk: legacy console [ttyS0] enabled
Mar 13 00:36:58.965837 kernel: ACPI: Core revision 20240827
Mar 13 00:36:58.965847 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 13 00:36:58.965854 kernel: APIC: Switch to symmetric I/O mode setup
Mar 13 00:36:58.965861 kernel: x2apic enabled
Mar 13 00:36:58.965869 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 13 00:36:58.965876 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 13 00:36:58.965883 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 13 00:36:58.965913 kernel: kvm-guest: setup PV IPIs
Mar 13 00:36:58.965923 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 13 00:36:58.965930 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Mar 13 00:36:58.965938 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Mar 13 00:36:58.965945 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 13 00:36:58.965952 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 13 00:36:58.965959 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 13 00:36:58.965967 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 13 00:36:58.965974 kernel: Spectre V2 : Mitigation: Retpolines
Mar 13 00:36:58.965981 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 13 00:36:58.965991 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Mar 13 00:36:58.965998 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 13 00:36:58.966006 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 13 00:36:58.966013 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 13 00:36:58.966021 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 13 00:36:58.966028 kernel: active return thunk: srso_alias_return_thunk
Mar 13 00:36:58.966035 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 13 00:36:58.966043 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 13 00:36:58.966052 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 13 00:36:58.966059 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 13 00:36:58.966066 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 13 00:36:58.966074 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 13 00:36:58.966081 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Mar 13 00:36:58.966088 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 13 00:36:58.966095 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Mar 13 00:36:58.966102 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Mar 13 00:36:58.966110 kernel: Freeing SMP alternatives memory: 32K
Mar 13 00:36:58.966119 kernel: pid_max: default: 32768 minimum: 301
Mar 13 00:36:58.966126 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 13 00:36:58.966133 kernel: landlock: Up and running.
Mar 13 00:36:58.966140 kernel: SELinux: Initializing.
Mar 13 00:36:58.966147 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 13 00:36:58.966154 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 13 00:36:58.966161 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 13 00:36:58.966168 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar 13 00:36:58.966175 kernel: ... version: 0
Mar 13 00:36:58.966184 kernel: ... bit width: 48
Mar 13 00:36:58.966191 kernel: ... generic registers: 6
Mar 13 00:36:58.966198 kernel: ... value mask: 0000ffffffffffff
Mar 13 00:36:58.966205 kernel: ... max period: 00007fffffffffff
Mar 13 00:36:58.966212 kernel: ... fixed-purpose events: 0
Mar 13 00:36:58.966219 kernel: ... event mask: 000000000000003f
Mar 13 00:36:58.966226 kernel: signal: max sigframe size: 3376
Mar 13 00:36:58.966233 kernel: rcu: Hierarchical SRCU implementation.
Mar 13 00:36:58.966241 kernel: rcu: Max phase no-delay instances is 400.
Mar 13 00:36:58.966250 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 13 00:36:58.966257 kernel: smp: Bringing up secondary CPUs ...
Mar 13 00:36:58.966264 kernel: smpboot: x86: Booting SMP configuration:
Mar 13 00:36:58.966271 kernel: .... node #0, CPUs: #1
Mar 13 00:36:58.966278 kernel: smp: Brought up 1 node, 2 CPUs
Mar 13 00:36:58.966285 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Mar 13 00:36:58.966292 kernel: Memory: 3953616K/4193772K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 235480K reserved, 0K cma-reserved)
Mar 13 00:36:58.966299 kernel: devtmpfs: initialized
Mar 13 00:36:58.966307 kernel: x86/mm: Memory block size: 128MB
Mar 13 00:36:58.966919 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 13 00:36:58.966932 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 13 00:36:58.966940 kernel: pinctrl core: initialized pinctrl subsystem
Mar 13 00:36:58.966947 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 13 00:36:58.966955 kernel: audit: initializing netlink subsys (disabled)
Mar 13 00:36:58.966962 kernel: audit: type=2000 audit(1773362216.493:1): state=initialized audit_enabled=0 res=1
Mar 13 00:36:58.966969 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 13 00:36:58.966976 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 13 00:36:58.966983 kernel: cpuidle: using governor menu
Mar 13 00:36:58.966994 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 13 00:36:58.967002 kernel: dca service started, version 1.12.1
Mar 13 00:36:58.967009 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Mar 13 00:36:58.967016 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 13 00:36:58.967023 kernel: PCI: Using configuration type 1 for base access
Mar 13 00:36:58.967030 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 13 00:36:58.967037 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 13 00:36:58.967045 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 13 00:36:58.967052 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 13 00:36:58.967061 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 13 00:36:58.967068 kernel: ACPI: Added _OSI(Module Device)
Mar 13 00:36:58.967075 kernel: ACPI: Added _OSI(Processor Device)
Mar 13 00:36:58.967082 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 13 00:36:58.967089 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 13 00:36:58.967097 kernel: ACPI: Interpreter enabled
Mar 13 00:36:58.967104 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 13 00:36:58.967111 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 13 00:36:58.967118 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 13 00:36:58.967127 kernel: PCI: Using E820 reservations for host bridge windows
Mar 13 00:36:58.967134 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 13 00:36:58.967141 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 13 00:36:58.967344 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 13 00:36:58.967474 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 13 00:36:58.967805 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 13 00:36:58.967816 kernel: PCI host bridge to bus 0000:00
Mar 13 00:36:58.967980 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 13 00:36:58.968098 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 13 00:36:58.968209 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 13 00:36:58.968319 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Mar 13 00:36:58.968428 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 13 00:36:58.968736 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Mar 13 00:36:58.968847 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 13 00:36:58.969538 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Mar 13 00:36:58.969686 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Mar 13 00:36:58.969812 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Mar 13 00:36:58.969956 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Mar 13 00:36:58.970078 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Mar 13 00:36:58.970198 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 13 00:36:58.970337 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Mar 13 00:36:58.970459 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Mar 13 00:36:58.970579 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Mar 13 00:36:58.970698 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 13 00:36:58.970836 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Mar 13 00:36:58.970987 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Mar 13 00:36:58.971111 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Mar 13 00:36:58.971237 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 13 00:36:58.971359 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Mar 13 00:36:58.971491 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Mar 13 00:36:58.971612 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 13 00:36:58.971741 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Mar 13 00:36:58.971861 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Mar 13 00:36:58.972008 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Mar 13 00:36:58.972140 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Mar 13 00:36:58.972260 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Mar 13 00:36:58.972270 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 13 00:36:58.972278 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 13 00:36:58.972285 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 13 00:36:58.972293 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 13 00:36:58.972300 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 13 00:36:58.972311 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 13 00:36:58.972318 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 13 00:36:58.972325 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 13 00:36:58.972332 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 13 00:36:58.972340 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 13 00:36:58.972533 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 13 00:36:58.972541 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 13 00:36:58.972548 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 13 00:36:58.972555 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 13 00:36:58.972564 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 13 00:36:58.972571 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 13 00:36:58.972578 kernel: iommu: Default domain type: Translated
Mar 13 00:36:58.972585 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 13 00:36:58.972592 kernel: PCI: Using ACPI for IRQ routing
Mar 13 00:36:58.972600 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 13 00:36:58.972607 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Mar 13 00:36:58.972614 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Mar 13 00:36:58.972733 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 13 00:36:58.972855 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 13 00:36:58.973015 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 13 00:36:58.973027 kernel: vgaarb: loaded
Mar 13 00:36:58.973034 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 13 00:36:58.973042 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 13 00:36:58.973049 kernel: clocksource: Switched to clocksource kvm-clock
Mar 13 00:36:58.973056 kernel: VFS: Disk quotas dquot_6.6.0
Mar 13 00:36:58.973063 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 13 00:36:58.973074 kernel: pnp: PnP ACPI init
Mar 13 00:36:58.973212 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 13 00:36:58.973224 kernel: pnp: PnP ACPI: found 5 devices
Mar 13 00:36:58.973231 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 13 00:36:58.973238 kernel: NET: Registered PF_INET protocol family
Mar 13 00:36:58.973245 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 13 00:36:58.973252 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 13 00:36:58.973260 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 13 00:36:58.973267 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 13 00:36:58.973277 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 13 00:36:58.973284 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 13 00:36:58.973291 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 13 00:36:58.973299 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 13 00:36:58.973306 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 13 00:36:58.973313 kernel: NET: Registered PF_XDP protocol family
Mar 13 00:36:58.973467 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 13 00:36:58.973766 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 13 00:36:58.973882 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 13 00:36:58.974056 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Mar 13 00:36:58.974168 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 13 00:36:58.974278 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Mar 13 00:36:58.974288 kernel: PCI: CLS 0 bytes, default 64
Mar 13 00:36:58.974296 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 13 00:36:58.974303 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Mar 13 00:36:58.974311 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Mar 13 00:36:58.974318 kernel: Initialise system trusted keyrings
Mar 13 00:36:58.974329 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 13 00:36:58.974336 kernel: Key type asymmetric registered
Mar 13 00:36:58.974343 kernel: Asymmetric key parser 'x509' registered
Mar 13 00:36:58.974351 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 13 00:36:58.974358 kernel: io scheduler mq-deadline registered
Mar 13 00:36:58.974365 kernel: io scheduler kyber registered
Mar 13 00:36:58.974372 kernel: io scheduler bfq registered
Mar 13 00:36:58.974380 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 13 00:36:58.974387 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 13 00:36:58.974397 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 13 00:36:58.974404 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 13 00:36:58.974411 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 13 00:36:58.974419 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 13 00:36:58.974426 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 13 00:36:58.974433 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 13 00:36:58.974440 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 13 00:36:58.974573 kernel: rtc_cmos 00:03: RTC can wake from S4
Mar 13 00:36:58.974694 kernel: rtc_cmos 00:03: registered as rtc0
Mar 13 00:36:58.974809 kernel: rtc_cmos 00:03: setting system clock to 2026-03-13T00:36:58 UTC (1773362218)
Mar 13 00:36:58.974954 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 13 00:36:58.974966 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 13 00:36:58.974974 kernel: NET: Registered PF_INET6 protocol family
Mar 13 00:36:58.974981 kernel: Segment Routing with IPv6
Mar 13 00:36:58.974988 kernel: In-situ OAM (IOAM) with IPv6
Mar 13 00:36:58.974996 kernel: NET: Registered PF_PACKET protocol family
Mar 13 00:36:58.975003 kernel: Key type dns_resolver registered
Mar 13 00:36:58.975014 kernel: IPI shorthand broadcast: enabled
Mar 13 00:36:58.975022 kernel: sched_clock: Marking stable (3053006250, 374189990)->(3512993760, -85797520)
Mar 13 00:36:58.975029 kernel: registered taskstats version 1
Mar 13 00:36:58.975037 kernel: Loading compiled-in X.509 certificates
Mar 13 00:36:58.975044 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 5aff49df330f42445474818d085d5033fee752d8'
Mar 13 00:36:58.975051 kernel: Demotion targets for Node 0: null
Mar 13 00:36:58.975059 kernel: Key type .fscrypt registered
Mar 13 00:36:58.975066 kernel: Key type fscrypt-provisioning registered
Mar 13 00:36:58.975073 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 13 00:36:58.975083 kernel: ima: Allocated hash algorithm: sha1
Mar 13 00:36:58.975090 kernel: ima: No architecture policies found
Mar 13 00:36:58.975098 kernel: clk: Disabling unused clocks
Mar 13 00:36:58.975105 kernel: Warning: unable to open an initial console.
Mar 13 00:36:58.975113 kernel: Freeing unused kernel image (initmem) memory: 46200K
Mar 13 00:36:58.975120 kernel: Write protecting the kernel read-only data: 40960k
Mar 13 00:36:58.975127 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Mar 13 00:36:58.975135 kernel: Run /init as init process
Mar 13 00:36:58.975144 kernel: with arguments:
Mar 13 00:36:58.975152 kernel: /init
Mar 13 00:36:58.975159 kernel: with environment:
Mar 13 00:36:58.975181 kernel: HOME=/
Mar 13 00:36:58.975190 kernel: TERM=linux
Mar 13 00:36:58.975199 systemd[1]: Successfully made /usr/ read-only.
Mar 13 00:36:58.975210 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 13 00:36:58.975218 systemd[1]: Detected virtualization kvm.
Mar 13 00:36:58.975228 systemd[1]: Detected architecture x86-64.
Mar 13 00:36:58.975236 systemd[1]: Running in initrd.
Mar 13 00:36:58.975244 systemd[1]: No hostname configured, using default hostname.
Mar 13 00:36:58.975252 systemd[1]: Hostname set to .
Mar 13 00:36:58.975260 systemd[1]: Initializing machine ID from random generator.
Mar 13 00:36:58.975268 systemd[1]: Queued start job for default target initrd.target.
Mar 13 00:36:58.975276 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 00:36:58.975284 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 00:36:58.975295 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 13 00:36:58.975306 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 13 00:36:58.975314 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 13 00:36:58.975323 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 13 00:36:58.975332 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 13 00:36:58.975340 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 13 00:36:58.975348 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 00:36:58.975358 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 13 00:36:58.975366 systemd[1]: Reached target paths.target - Path Units.
Mar 13 00:36:58.975374 systemd[1]: Reached target slices.target - Slice Units.
Mar 13 00:36:58.975383 systemd[1]: Reached target swap.target - Swaps.
Mar 13 00:36:58.975391 systemd[1]: Reached target timers.target - Timer Units.
Mar 13 00:36:58.975399 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 13 00:36:58.975407 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 13 00:36:58.975415 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 13 00:36:58.975425 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 13 00:36:58.975433 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 00:36:58.975441 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 13 00:36:58.975451 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 00:36:58.975459 systemd[1]: Reached target sockets.target - Socket Units.
Mar 13 00:36:58.975467 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 13 00:36:58.975477 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 13 00:36:58.975485 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 13 00:36:58.975494 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 13 00:36:58.975502 systemd[1]: Starting systemd-fsck-usr.service...
Mar 13 00:36:58.975510 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 13 00:36:58.975518 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 13 00:36:58.975526 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:36:58.975534 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 13 00:36:58.975545 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 00:36:58.975553 systemd[1]: Finished systemd-fsck-usr.service.
Mar 13 00:36:58.975582 systemd-journald[187]: Collecting audit messages is disabled.
Mar 13 00:36:58.975604 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 13 00:36:58.975612 systemd-journald[187]: Journal started
Mar 13 00:36:58.975630 systemd-journald[187]: Runtime Journal (/run/log/journal/6658aade3c7d4434a517580f3c50e3bf) is 8M, max 78.2M, 70.2M free.
Mar 13 00:36:58.939105 systemd-modules-load[188]: Inserted module 'overlay'
Mar 13 00:36:58.983935 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 13 00:36:58.988270 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 13 00:36:59.114777 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 13 00:36:59.114811 kernel: Bridge firewalling registered
Mar 13 00:36:59.015156 systemd-modules-load[188]: Inserted module 'br_netfilter'
Mar 13 00:36:59.024480 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:36:59.118101 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 13 00:36:59.120768 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:36:59.126132 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 13 00:36:59.126536 systemd-tmpfiles[201]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 13 00:36:59.132170 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 13 00:36:59.137007 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 13 00:36:59.140014 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 00:36:59.155646 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:36:59.161077 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:36:59.166042 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 13 00:36:59.170193 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 13 00:36:59.174006 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 13 00:36:59.195440 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:36:59.216183 systemd-resolved[222]: Positive Trust Anchors:
Mar 13 00:36:59.216199 systemd-resolved[222]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 13 00:36:59.216227 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 13 00:36:59.224377 systemd-resolved[222]: Defaulting to hostname 'linux'.
Mar 13 00:36:59.225874 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 13 00:36:59.227054 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 13 00:36:59.308935 kernel: SCSI subsystem initialized
Mar 13 00:36:59.318905 kernel: Loading iSCSI transport class v2.0-870.
Mar 13 00:36:59.330907 kernel: iscsi: registered transport (tcp)
Mar 13 00:36:59.355853 kernel: iscsi: registered transport (qla4xxx)
Mar 13 00:36:59.355911 kernel: QLogic iSCSI HBA Driver
Mar 13 00:36:59.378788 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 13 00:36:59.397192 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:36:59.398501 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 13 00:36:59.451977 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 13 00:36:59.455164 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 13 00:36:59.507981 kernel: raid6: avx2x4 gen() 26355 MB/s
Mar 13 00:36:59.525915 kernel: raid6: avx2x2 gen() 24941 MB/s
Mar 13 00:36:59.544999 kernel: raid6: avx2x1 gen() 17412 MB/s
Mar 13 00:36:59.545038 kernel: raid6: using algorithm avx2x4 gen() 26355 MB/s
Mar 13 00:36:59.568968 kernel: raid6: .... xor() 4739 MB/s, rmw enabled
Mar 13 00:36:59.569008 kernel: raid6: using avx2x2 recovery algorithm
Mar 13 00:36:59.591947 kernel: xor: automatically using best checksumming function avx
Mar 13 00:36:59.744925 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 13 00:36:59.752572 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 13 00:36:59.755309 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:36:59.785520 systemd-udevd[434]: Using default interface naming scheme 'v255'.
Mar 13 00:36:59.791421 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:36:59.795055 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 13 00:36:59.815364 dracut-pre-trigger[443]: rd.md=0: removing MD RAID activation
Mar 13 00:36:59.843955 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 13 00:36:59.847313 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 13 00:36:59.925959 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 00:36:59.930841 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 13 00:36:59.996913 kernel: cryptd: max_cpu_qlen set to 1000
Mar 13 00:37:00.007425 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Mar 13 00:37:00.007665 kernel: AES CTR mode by8 optimization enabled
Mar 13 00:37:00.031932 kernel: scsi host0: Virtio SCSI HBA
Mar 13 00:37:00.044939 kernel: libata version 3.00 loaded.
Mar 13 00:37:00.052037 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 00:37:00.062270 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Mar 13 00:37:00.052970 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:37:00.059986 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:37:00.067071 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:37:00.416980 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Mar 13 00:37:00.416405 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 13 00:37:00.425139 kernel: ahci 0000:00:1f.2: version 3.0
Mar 13 00:37:00.425348 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 13 00:37:00.484115 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Mar 13 00:37:00.484433 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Mar 13 00:37:00.484596 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 13 00:37:00.495286 kernel: scsi host1: ahci
Mar 13 00:37:00.505926 kernel: scsi host2: ahci
Mar 13 00:37:00.508923 kernel: scsi host3: ahci
Mar 13 00:37:00.509162 kernel: sd 0:0:0:0: Power-on or device reset occurred
Mar 13 00:37:00.509371 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Mar 13 00:37:00.509531 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 13 00:37:00.509696 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Mar 13 00:37:00.509847 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 13 00:37:00.512922 kernel: scsi host4: ahci
Mar 13 00:37:00.516915 kernel: scsi host5: ahci
Mar 13 00:37:00.518917 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 13 00:37:00.518945 kernel: GPT:9289727 != 167739391
Mar 13 00:37:00.518957 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 13 00:37:00.518967 kernel: GPT:9289727 != 167739391
Mar 13 00:37:00.518977 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 13 00:37:00.518987 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 13 00:37:00.519920 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 13 00:37:00.522990 kernel: scsi host6: ahci
Mar 13 00:37:00.523190 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 1
Mar 13 00:37:00.523204 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 1
Mar 13 00:37:00.523215 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 1
Mar 13 00:37:00.523226 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 1
Mar 13 00:37:00.523237 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 1
Mar 13 00:37:00.523247 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 1
Mar 13 00:37:00.662231 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:37:00.836919 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 13 00:37:00.837006 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 13 00:37:00.838950 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 13 00:37:00.844042 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 13 00:37:00.844921 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 13 00:37:00.850248 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Mar 13 00:37:00.920052 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Mar 13 00:37:00.930599 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Mar 13 00:37:00.947742 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 13 00:37:00.949059 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 13 00:37:00.958513 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Mar 13 00:37:00.959549 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Mar 13 00:37:00.963203 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 13 00:37:00.964137 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:37:00.966239 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 13 00:37:00.970000 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 13 00:37:00.972004 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 13 00:37:00.997868 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 13 00:37:01.002905 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 13 00:37:01.003283 disk-uuid[611]: Primary Header is updated.
Mar 13 00:37:01.003283 disk-uuid[611]: Secondary Entries is updated.
Mar 13 00:37:01.003283 disk-uuid[611]: Secondary Header is updated.
Mar 13 00:37:02.028946 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 13 00:37:02.031108 disk-uuid[619]: The operation has completed successfully.
Mar 13 00:37:02.086446 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 13 00:37:02.086587 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 13 00:37:02.112446 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 13 00:37:02.135596 sh[633]: Success
Mar 13 00:37:02.159363 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 13 00:37:02.159405 kernel: device-mapper: uevent: version 1.0.3
Mar 13 00:37:02.164870 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Mar 13 00:37:02.177051 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Mar 13 00:37:02.224573 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 13 00:37:02.229106 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 13 00:37:02.239980 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 13 00:37:02.258919 kernel: BTRFS: device fsid 503642f8-c59c-4168-97a8-9c3603183fa3 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (645)
Mar 13 00:37:02.264328 kernel: BTRFS info (device dm-0): first mount of filesystem 503642f8-c59c-4168-97a8-9c3603183fa3
Mar 13 00:37:02.264580 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:37:02.279060 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations
Mar 13 00:37:02.279088 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Mar 13 00:37:02.282211 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Mar 13 00:37:02.287138 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 13 00:37:02.288752 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Mar 13 00:37:02.290138 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 13 00:37:02.290942 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 13 00:37:02.296017 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 13 00:37:02.330912 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (678)
Mar 13 00:37:02.334979 kernel: BTRFS info (device sda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:37:02.337912 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:37:02.348008 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 13 00:37:02.348033 kernel: BTRFS info (device sda6): turning on async discard
Mar 13 00:37:02.348046 kernel: BTRFS info (device sda6): enabling free space tree
Mar 13 00:37:02.355915 kernel: BTRFS info (device sda6): last unmount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:37:02.357359 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 13 00:37:02.361087 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 13 00:37:02.457965 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 13 00:37:02.462022 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 13 00:37:02.477517 ignition[748]: Ignition 2.22.0
Mar 13 00:37:02.477530 ignition[748]: Stage: fetch-offline
Mar 13 00:37:02.477560 ignition[748]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:37:02.481127 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 13 00:37:02.477570 ignition[748]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 13 00:37:02.477649 ignition[748]: parsed url from cmdline: ""
Mar 13 00:37:02.477654 ignition[748]: no config URL provided
Mar 13 00:37:02.477659 ignition[748]: reading system config file "/usr/lib/ignition/user.ign"
Mar 13 00:37:02.477667 ignition[748]: no config at "/usr/lib/ignition/user.ign"
Mar 13 00:37:02.477672 ignition[748]: failed to fetch config: resource requires networking
Mar 13 00:37:02.477819 ignition[748]: Ignition finished successfully
Mar 13 00:37:02.511182 systemd-networkd[818]: lo: Link UP
Mar 13 00:37:02.511195 systemd-networkd[818]: lo: Gained carrier
Mar 13 00:37:02.513171 systemd-networkd[818]: Enumeration completed
Mar 13 00:37:02.513259 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 13 00:37:02.514622 systemd[1]: Reached target network.target - Network.
Mar 13 00:37:02.514719 systemd-networkd[818]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:37:02.514723 systemd-networkd[818]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 13 00:37:02.518751 systemd-networkd[818]: eth0: Link UP
Mar 13 00:37:02.519087 systemd-networkd[818]: eth0: Gained carrier
Mar 13 00:37:02.519112 systemd-networkd[818]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:37:02.523048 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 13 00:37:02.560801 ignition[822]: Ignition 2.22.0
Mar 13 00:37:02.560819 ignition[822]: Stage: fetch
Mar 13 00:37:02.560973 ignition[822]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:37:02.560986 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 13 00:37:02.561078 ignition[822]: parsed url from cmdline: ""
Mar 13 00:37:02.561083 ignition[822]: no config URL provided
Mar 13 00:37:02.561089 ignition[822]: reading system config file "/usr/lib/ignition/user.ign"
Mar 13 00:37:02.561100 ignition[822]: no config at "/usr/lib/ignition/user.ign"
Mar 13 00:37:02.561129 ignition[822]: PUT http://169.254.169.254/v1/token: attempt #1
Mar 13 00:37:02.561558 ignition[822]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 13 00:37:02.762238 ignition[822]: PUT http://169.254.169.254/v1/token: attempt #2
Mar 13 00:37:02.762435 ignition[822]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 13 00:37:03.162884 ignition[822]: PUT http://169.254.169.254/v1/token: attempt #3
Mar 13 00:37:03.163112 ignition[822]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 13 00:37:03.600007 systemd-networkd[818]: eth0: DHCPv4 address 172.234.197.95/24, gateway 172.234.197.1 acquired from 23.205.167.222
Mar 13 00:37:03.818067 systemd-networkd[818]: eth0: Gained IPv6LL
Mar 13 00:37:03.963243 ignition[822]: PUT http://169.254.169.254/v1/token: attempt #4
Mar 13 00:37:04.064103 ignition[822]: PUT result: OK
Mar 13 00:37:04.064155 ignition[822]: GET http://169.254.169.254/v1/user-data: attempt #1
Mar 13 00:37:04.179231 ignition[822]: GET result: OK
Mar 13 00:37:04.179358 ignition[822]: parsing config with SHA512: 237db75e6edadba68c108d8979516d91aa1a01928e8d863a183c61ea6caf5f479ab991cad4995f447a7d23a6b0661297ff98b40d28f106379fcd1200d55f0271
Mar 13 00:37:04.183206 unknown[822]: fetched base config from "system"
Mar 13 00:37:04.183216 unknown[822]: fetched base config from "system"
Mar 13 00:37:04.187137 ignition[822]: fetch: fetch complete
Mar 13 00:37:04.183223 unknown[822]: fetched user config from "akamai"
Mar 13 00:37:04.187146 ignition[822]: fetch: fetch passed
Mar 13 00:37:04.193221 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 13 00:37:04.187192 ignition[822]: Ignition finished successfully
Mar 13 00:37:04.211009 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 13 00:37:04.247497 ignition[830]: Ignition 2.22.0
Mar 13 00:37:04.247514 ignition[830]: Stage: kargs
Mar 13 00:37:04.247947 ignition[830]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:37:04.247961 ignition[830]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 13 00:37:04.248728 ignition[830]: kargs: kargs passed
Mar 13 00:37:04.248772 ignition[830]: Ignition finished successfully
Mar 13 00:37:04.252226 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 13 00:37:04.255368 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 13 00:37:04.288344 ignition[837]: Ignition 2.22.0
Mar 13 00:37:04.288362 ignition[837]: Stage: disks
Mar 13 00:37:04.288685 ignition[837]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:37:04.288696 ignition[837]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 13 00:37:04.292339 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 13 00:37:04.289279 ignition[837]: disks: disks passed
Mar 13 00:37:04.293743 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 13 00:37:04.289322 ignition[837]: Ignition finished successfully
Mar 13 00:37:04.295048 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 13 00:37:04.296688 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 13 00:37:04.298044 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 13 00:37:04.299748 systemd[1]: Reached target basic.target - Basic System.
Mar 13 00:37:04.302350 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 13 00:37:04.330438 systemd-fsck[846]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Mar 13 00:37:04.333991 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 13 00:37:04.336763 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 13 00:37:04.451914 kernel: EXT4-fs (sda9): mounted filesystem 26348f72-0225-4c06-aedc-823e61beebc6 r/w with ordered data mode. Quota mode: none.
Mar 13 00:37:04.453207 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 13 00:37:04.454802 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 13 00:37:04.456785 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 13 00:37:04.459961 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 13 00:37:04.462302 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 13 00:37:04.462354 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 13 00:37:04.462380 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 13 00:37:04.471290 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 13 00:37:04.476040 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 13 00:37:04.479282 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (854)
Mar 13 00:37:04.484229 kernel: BTRFS info (device sda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:37:04.484257 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:37:04.494097 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 13 00:37:04.494126 kernel: BTRFS info (device sda6): turning on async discard
Mar 13 00:37:04.498129 kernel: BTRFS info (device sda6): enabling free space tree
Mar 13 00:37:04.500459 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 13 00:37:04.546695 initrd-setup-root[879]: cut: /sysroot/etc/passwd: No such file or directory
Mar 13 00:37:04.553726 initrd-setup-root[886]: cut: /sysroot/etc/group: No such file or directory
Mar 13 00:37:04.558920 initrd-setup-root[893]: cut: /sysroot/etc/shadow: No such file or directory
Mar 13 00:37:04.564913 initrd-setup-root[900]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 13 00:37:04.664424 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 13 00:37:04.666810 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 13 00:37:04.669066 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 13 00:37:04.683088 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 13 00:37:04.686944 kernel: BTRFS info (device sda6): last unmount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:37:04.701011 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 13 00:37:04.720946 ignition[969]: INFO : Ignition 2.22.0
Mar 13 00:37:04.720946 ignition[969]: INFO : Stage: mount
Mar 13 00:37:04.720946 ignition[969]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 00:37:04.720946 ignition[969]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 13 00:37:04.725767 ignition[969]: INFO : mount: mount passed
Mar 13 00:37:04.725767 ignition[969]: INFO : Ignition finished successfully
Mar 13 00:37:04.723526 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 13 00:37:04.726977 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 13 00:37:05.455105 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 13 00:37:05.480926 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (980)
Mar 13 00:37:05.480963 kernel: BTRFS info (device sda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:37:05.484298 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:37:05.494858 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 13 00:37:05.494882 kernel: BTRFS info (device sda6): turning on async discard
Mar 13 00:37:05.494914 kernel: BTRFS info (device sda6): enabling free space tree
Mar 13 00:37:05.499319 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 13 00:37:05.533332 ignition[997]: INFO : Ignition 2.22.0
Mar 13 00:37:05.533332 ignition[997]: INFO : Stage: files
Mar 13 00:37:05.535295 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 00:37:05.535295 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 13 00:37:05.535295 ignition[997]: DEBUG : files: compiled without relabeling support, skipping
Mar 13 00:37:05.535295 ignition[997]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 13 00:37:05.535295 ignition[997]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 13 00:37:05.540825 ignition[997]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 13 00:37:05.540825 ignition[997]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 13 00:37:05.540825 ignition[997]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 13 00:37:05.540825 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 13 00:37:05.540825 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 13 00:37:05.538649 unknown[997]: wrote ssh authorized keys file for user: core
Mar 13 00:37:05.633185 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 13 00:37:05.677831 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 13 00:37:05.679522 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 13 00:37:05.679522 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 13 00:37:05.679522 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 13 00:37:05.679522 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 13 00:37:05.679522 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 13 00:37:05.679522 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 13 00:37:05.679522 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 13 00:37:05.679522 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 13 00:37:05.689977 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 13 00:37:05.689977 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 13 00:37:05.689977 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 13 00:37:05.689977 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 13 00:37:05.689977 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 13 00:37:05.689977 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 13 00:37:06.008297 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 13 00:37:07.133470 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 13 00:37:07.133470 ignition[997]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 13 00:37:07.137315 ignition[997]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 13 00:37:07.137315 ignition[997]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 13 00:37:07.137315 ignition[997]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 13 00:37:07.137315 ignition[997]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 13 00:37:07.142232 ignition[997]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 13 00:37:07.142232 ignition[997]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 13 00:37:07.142232 ignition[997]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 13 00:37:07.142232 ignition[997]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Mar 13 00:37:07.142232 ignition[997]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Mar 13 00:37:07.142232 ignition[997]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 13 00:37:07.142232 ignition[997]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 13 00:37:07.142232 ignition[997]: INFO : files: files passed
Mar 13 00:37:07.142232 ignition[997]: INFO : Ignition finished successfully
Mar 13 00:37:07.141511 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 13 00:37:07.145568 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 13 00:37:07.160045 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 13 00:37:07.163284 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 13 00:37:07.163453 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 13 00:37:07.181185 initrd-setup-root-after-ignition[1027]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 00:37:07.181185 initrd-setup-root-after-ignition[1027]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 00:37:07.184114 initrd-setup-root-after-ignition[1031]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 00:37:07.184574 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 13 00:37:07.186824 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 13 00:37:07.189098 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 13 00:37:07.244750 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 13 00:37:07.244914 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 13 00:37:07.247348 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 13 00:37:07.248624 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 13 00:37:07.250866 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 13 00:37:07.252656 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 13 00:37:07.274827 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 13 00:37:07.278823 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 13 00:37:07.303654 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 13 00:37:07.304556 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:37:07.306557 systemd[1]: Stopped target timers.target - Timer Units.
Mar 13 00:37:07.308373 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 13 00:37:07.308525 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 13 00:37:07.310719 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 13 00:37:07.311860 systemd[1]: Stopped target basic.target - Basic System.
Mar 13 00:37:07.313942 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 13 00:37:07.315399 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 13 00:37:07.317314 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 13 00:37:07.319043 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 13 00:37:07.320715 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 13 00:37:07.322505 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 13 00:37:07.324619 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 13 00:37:07.326461 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 13 00:37:07.328250 systemd[1]: Stopped target swap.target - Swaps.
Mar 13 00:37:07.329993 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 13 00:37:07.330137 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 13 00:37:07.331975 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 13 00:37:07.333154 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 00:37:07.334919 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 13 00:37:07.335958 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 00:37:07.337787 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 13 00:37:07.337918 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 13 00:37:07.340190 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 13 00:37:07.340357 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 13 00:37:07.341537 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 13 00:37:07.341943 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 13 00:37:07.346079 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 13 00:37:07.347045 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 13 00:37:07.347202 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 00:37:07.351209 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 13 00:37:07.353990 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 13 00:37:07.354148 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 00:37:07.355979 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 13 00:37:07.356080 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 13 00:37:07.367688 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 13 00:37:07.393709 ignition[1051]: INFO : Ignition 2.22.0
Mar 13 00:37:07.393709 ignition[1051]: INFO : Stage: umount
Mar 13 00:37:07.393709 ignition[1051]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 00:37:07.393709 ignition[1051]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 13 00:37:07.367799 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 13 00:37:07.403144 ignition[1051]: INFO : umount: umount passed
Mar 13 00:37:07.403144 ignition[1051]: INFO : Ignition finished successfully
Mar 13 00:37:07.398177 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 13 00:37:07.399011 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 13 00:37:07.404133 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 13 00:37:07.404200 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 13 00:37:07.405140 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 13 00:37:07.405194 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 13 00:37:07.408054 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 13 00:37:07.408129 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 13 00:37:07.409356 systemd[1]: Stopped target network.target - Network.
Mar 13 00:37:07.412016 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 13 00:37:07.412117 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 13 00:37:07.414519 systemd[1]: Stopped target paths.target - Path Units.
Mar 13 00:37:07.415334 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 13 00:37:07.417203 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 00:37:07.418872 systemd[1]: Stopped target slices.target - Slice Units.
Mar 13 00:37:07.425086 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 13 00:37:07.428995 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 13 00:37:07.429069 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 13 00:37:07.431318 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 13 00:37:07.431374 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 13 00:37:07.433206 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 13 00:37:07.433281 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 13 00:37:07.435047 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 13 00:37:07.435124 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 13 00:37:07.438339 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 13 00:37:07.439180 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 13 00:37:07.443238 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 13 00:37:07.449238 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 13 00:37:07.449358 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 13 00:37:07.452424 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 13 00:37:07.452697 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 13 00:37:07.452825 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 13 00:37:07.456633 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 13 00:37:07.456925 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 13 00:37:07.457049 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 13 00:37:07.459751 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 13 00:37:07.461195 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 13 00:37:07.461242 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 00:37:07.463061 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 13 00:37:07.463116 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 13 00:37:07.465978 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 13 00:37:07.467442 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 13 00:37:07.467499 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 13 00:37:07.469157 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 13 00:37:07.469207 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:37:07.472087 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 13 00:37:07.472139 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:37:07.474039 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 13 00:37:07.474108 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 00:37:07.476034 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:37:07.481209 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 13 00:37:07.481287 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 13 00:37:07.496528 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 13 00:37:07.496729 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:37:07.499563 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 13 00:37:07.499831 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 13 00:37:07.501443 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 13 00:37:07.501486 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 00:37:07.502977 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 13 00:37:07.503030 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 13 00:37:07.505290 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 13 00:37:07.505343 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 13 00:37:07.506994 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 13 00:37:07.507047 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 13 00:37:07.511046 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 13 00:37:07.512627 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 13 00:37:07.512691 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:37:07.518068 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 13 00:37:07.518146 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:37:07.520008 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 00:37:07.520064 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:37:07.523378 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Mar 13 00:37:07.523469 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 13 00:37:07.523524 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 13 00:37:07.523997 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 13 00:37:07.524129 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 13 00:37:07.531298 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 13 00:37:07.531433 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 13 00:37:07.532786 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 13 00:37:07.535239 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 13 00:37:07.553633 systemd[1]: Switching root.
Mar 13 00:37:07.594323 systemd-journald[187]: Journal stopped
Mar 13 00:37:08.943471 systemd-journald[187]: Received SIGTERM from PID 1 (systemd).
Mar 13 00:37:08.943508 kernel: SELinux: policy capability network_peer_controls=1
Mar 13 00:37:08.943521 kernel: SELinux: policy capability open_perms=1
Mar 13 00:37:08.943531 kernel: SELinux: policy capability extended_socket_class=1
Mar 13 00:37:08.943540 kernel: SELinux: policy capability always_check_network=0
Mar 13 00:37:08.943552 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 13 00:37:08.943562 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 13 00:37:08.943572 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 13 00:37:08.943581 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 13 00:37:08.943590 kernel: SELinux: policy capability userspace_initial_context=0
Mar 13 00:37:08.943788 kernel: audit: type=1403 audit(1773362227.776:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 13 00:37:08.943798 systemd[1]: Successfully loaded SELinux policy in 100.512ms.
Mar 13 00:37:08.943812 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.910ms.
Mar 13 00:37:08.943824 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 13 00:37:08.943835 systemd[1]: Detected virtualization kvm.
Mar 13 00:37:08.943846 systemd[1]: Detected architecture x86-64.
Mar 13 00:37:08.943858 systemd[1]: Detected first boot.
Mar 13 00:37:08.943869 systemd[1]: Initializing machine ID from random generator.
Mar 13 00:37:08.943879 zram_generator::config[1094]: No configuration found.
Mar 13 00:37:08.943910 kernel: Guest personality initialized and is inactive
Mar 13 00:37:08.943921 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Mar 13 00:37:08.943931 kernel: Initialized host personality
Mar 13 00:37:08.943941 kernel: NET: Registered PF_VSOCK protocol family
Mar 13 00:37:08.943952 systemd[1]: Populated /etc with preset unit settings.
Mar 13 00:37:08.943967 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 13 00:37:08.943977 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 13 00:37:08.943987 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 13 00:37:08.943998 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 13 00:37:08.944008 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 13 00:37:08.944019 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 13 00:37:08.944030 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 13 00:37:08.944044 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 13 00:37:08.944054 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 13 00:37:08.944065 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 13 00:37:08.944076 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 13 00:37:08.944086 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 13 00:37:08.944096 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 00:37:08.944107 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 00:37:08.944117 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 13 00:37:08.944130 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 13 00:37:08.944144 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 13 00:37:08.944155 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 13 00:37:08.944166 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 13 00:37:08.944176 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 00:37:08.944187 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 13 00:37:08.944197 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 13 00:37:08.944210 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 13 00:37:08.944221 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 13 00:37:08.944232 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 13 00:37:08.944244 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:37:08.944255 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 13 00:37:08.944265 systemd[1]: Reached target slices.target - Slice Units.
Mar 13 00:37:08.944275 systemd[1]: Reached target swap.target - Swaps.
Mar 13 00:37:08.944286 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 13 00:37:08.944296 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 13 00:37:08.944310 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 13 00:37:08.944320 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 00:37:08.944331 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 13 00:37:08.944341 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 00:37:08.944355 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 13 00:37:08.944365 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 13 00:37:08.944376 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 13 00:37:08.944386 systemd[1]: Mounting media.mount - External Media Directory...
Mar 13 00:37:08.944397 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:37:08.944408 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 13 00:37:08.944418 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 13 00:37:08.944429 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 13 00:37:08.944443 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 13 00:37:08.944453 systemd[1]: Reached target machines.target - Containers.
Mar 13 00:37:08.944464 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 13 00:37:08.944476 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:37:08.944486 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 13 00:37:08.944497 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 13 00:37:08.944508 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 00:37:08.944519 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 13 00:37:08.944529 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 00:37:08.944542 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 13 00:37:08.944553 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 00:37:08.944564 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 13 00:37:08.944574 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 13 00:37:08.944585 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 13 00:37:08.944595 kernel: ACPI: bus type drm_connector registered
Mar 13 00:37:08.944605 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 13 00:37:08.944615 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 13 00:37:08.944818 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:37:08.944829 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 13 00:37:08.944839 kernel: loop: module loaded
Mar 13 00:37:08.944849 kernel: fuse: init (API version 7.41)
Mar 13 00:37:08.944859 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 13 00:37:08.944870 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 13 00:37:08.944881 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 13 00:37:08.944912 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 13 00:37:08.944927 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 13 00:37:08.944938 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 13 00:37:08.944948 systemd[1]: Stopped verity-setup.service.
Mar 13 00:37:08.944959 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:37:08.944969 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 13 00:37:08.944980 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 13 00:37:08.944990 systemd[1]: Mounted media.mount - External Media Directory.
Mar 13 00:37:08.945001 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 13 00:37:08.945011 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 13 00:37:08.945048 systemd-journald[1182]: Collecting audit messages is disabled.
Mar 13 00:37:08.945068 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 13 00:37:08.945080 systemd-journald[1182]: Journal started
Mar 13 00:37:08.945103 systemd-journald[1182]: Runtime Journal (/run/log/journal/dd29aca8a2e546d6823c057deddafd49) is 8M, max 78.2M, 70.2M free.
Mar 13 00:37:08.493216 systemd[1]: Queued start job for default target multi-user.target.
Mar 13 00:37:08.509157 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 13 00:37:08.510296 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 13 00:37:08.947938 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 13 00:37:08.952920 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 13 00:37:08.954093 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 00:37:08.955255 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 13 00:37:08.955581 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 13 00:37:08.956871 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 00:37:08.957218 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 00:37:08.958418 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 13 00:37:08.958754 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 13 00:37:08.960077 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 00:37:08.960426 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 00:37:08.961585 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 13 00:37:08.961965 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 13 00:37:08.963075 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 00:37:08.963348 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 00:37:08.964706 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:37:08.966271 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:37:08.967839 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 13 00:37:08.969258 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 13 00:37:08.989747 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 13 00:37:08.995978 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 13 00:37:09.001050 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 13 00:37:09.002084 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 13 00:37:09.002121 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 13 00:37:09.004090 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 13 00:37:09.010017 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 13 00:37:09.013101 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:37:09.016101 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 13 00:37:09.020001 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 13 00:37:09.022152 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 13 00:37:09.025001 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 13 00:37:09.025801 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 13 00:37:09.030142 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 13 00:37:09.033716 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 13 00:37:09.040063 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 13 00:37:09.043864 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 13 00:37:09.046108 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 13 00:37:09.068149 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 13 00:37:09.069444 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 13 00:37:09.076183 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 13 00:37:09.091411 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 00:37:09.095178 systemd-journald[1182]: Time spent on flushing to /var/log/journal/dd29aca8a2e546d6823c057deddafd49 is 28.015ms for 1010 entries.
Mar 13 00:37:09.095178 systemd-journald[1182]: System Journal (/var/log/journal/dd29aca8a2e546d6823c057deddafd49) is 8M, max 195.6M, 187.6M free.
Mar 13 00:37:09.136674 systemd-journald[1182]: Received client request to flush runtime journal.
Mar 13 00:37:09.137958 kernel: loop0: detected capacity change from 0 to 128560
Mar 13 00:37:09.145782 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:37:09.151350 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 13 00:37:09.155230 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 13 00:37:09.166952 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 13 00:37:09.189026 kernel: loop1: detected capacity change from 0 to 110984
Mar 13 00:37:09.200101 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 13 00:37:09.210170 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 13 00:37:09.239947 kernel: loop2: detected capacity change from 0 to 8
Mar 13 00:37:09.266818 systemd-tmpfiles[1238]: ACLs are not supported, ignoring.
Mar 13 00:37:09.267182 systemd-tmpfiles[1238]: ACLs are not supported, ignoring.
Mar 13 00:37:09.273140 kernel: loop3: detected capacity change from 0 to 228704
Mar 13 00:37:09.274377 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:37:09.315867 kernel: loop4: detected capacity change from 0 to 128560
Mar 13 00:37:09.335905 kernel: loop5: detected capacity change from 0 to 110984
Mar 13 00:37:09.346913 kernel: loop6: detected capacity change from 0 to 8
Mar 13 00:37:09.351119 kernel: loop7: detected capacity change from 0 to 228704
Mar 13 00:37:09.376839 (sd-merge)[1246]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
Mar 13 00:37:09.377535 (sd-merge)[1246]: Merged extensions into '/usr'.
Mar 13 00:37:09.382403 systemd[1]: Reload requested from client PID 1219 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 13 00:37:09.382504 systemd[1]: Reloading...
Mar 13 00:37:09.440939 zram_generator::config[1268]: No configuration found.
Mar 13 00:37:09.623992 ldconfig[1214]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 13 00:37:09.715307 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 13 00:37:09.715402 systemd[1]: Reloading finished in 332 ms.
Mar 13 00:37:09.735310 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 13 00:37:09.736980 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 13 00:37:09.750519 systemd[1]: Starting ensure-sysext.service...
Mar 13 00:37:09.755010 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 13 00:37:09.772464 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 13 00:37:09.772574 systemd-tmpfiles[1316]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 13 00:37:09.772607 systemd-tmpfiles[1316]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 13 00:37:09.772911 systemd-tmpfiles[1316]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 13 00:37:09.773171 systemd-tmpfiles[1316]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 13 00:37:09.774574 systemd-tmpfiles[1316]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 13 00:37:09.774835 systemd-tmpfiles[1316]: ACLs are not supported, ignoring.
Mar 13 00:37:09.774933 systemd-tmpfiles[1316]: ACLs are not supported, ignoring.
Mar 13 00:37:09.777144 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:37:09.779743 systemd-tmpfiles[1316]: Detected autofs mount point /boot during canonicalization of boot.
Mar 13 00:37:09.779761 systemd-tmpfiles[1316]: Skipping /boot
Mar 13 00:37:09.783137 systemd[1]: Reload requested from client PID 1315 ('systemctl') (unit ensure-sysext.service)...
Mar 13 00:37:09.783231 systemd[1]: Reloading...
Mar 13 00:37:09.791468 systemd-tmpfiles[1316]: Detected autofs mount point /boot during canonicalization of boot.
Mar 13 00:37:09.791485 systemd-tmpfiles[1316]: Skipping /boot
Mar 13 00:37:09.842308 systemd-udevd[1319]: Using default interface naming scheme 'v255'.
Mar 13 00:37:09.891972 zram_generator::config[1344]: No configuration found.
Mar 13 00:37:10.157505 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 13 00:37:10.157626 systemd[1]: Reloading finished in 373 ms.
Mar 13 00:37:10.166160 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:37:10.167950 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 00:37:10.199120 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 13 00:37:10.208914 kernel: mousedev: PS/2 mouse device common for all mice
Mar 13 00:37:10.224148 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 13 00:37:10.231995 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 13 00:37:10.235103 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 13 00:37:10.243353 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 13 00:37:10.243625 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 13 00:37:10.244477 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 13 00:37:10.253837 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 13 00:37:10.257232 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 13 00:37:10.272208 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:37:10.272378 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:37:10.274651 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 00:37:10.286209 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 00:37:10.290144 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 00:37:10.292035 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:37:10.292126 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:37:10.297972 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 13 00:37:10.298986 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:37:10.312371 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:37:10.312536 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:37:10.312686 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:37:10.312759 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:37:10.312829 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:37:10.316966 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 13 00:37:10.324138 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 13 00:37:10.332165 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:37:10.333129 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:37:10.342094 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 13 00:37:10.344747 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:37:10.344839 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:37:10.344976 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:37:10.363981 systemd[1]: Finished ensure-sysext.service.
Mar 13 00:37:10.366498 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 13 00:37:10.380292 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 13 00:37:10.387947 kernel: ACPI: button: Power Button [PWRF]
Mar 13 00:37:10.400069 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 13 00:37:10.402380 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 13 00:37:10.404538 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 13 00:37:10.430672 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 00:37:10.432038 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 00:37:10.436248 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 00:37:10.436646 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 00:37:10.443838 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 13 00:37:10.452136 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 13 00:37:10.452351 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 13 00:37:10.455744 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 00:37:10.456102 augenrules[1479]: No rules
Mar 13 00:37:10.456570 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 00:37:10.459279 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 13 00:37:10.459587 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 13 00:37:10.466102 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 13 00:37:10.482376 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:37:10.503908 kernel: EDAC MC: Ver: 3.0.0
Mar 13 00:37:10.504846 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 13 00:37:10.515797 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 13 00:37:10.544871 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 13 00:37:10.584649 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 13 00:37:10.698855 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:37:10.751109 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 13 00:37:10.752113 systemd[1]: Reached target time-set.target - System Time Set.
Mar 13 00:37:10.773564 systemd-networkd[1427]: lo: Link UP
Mar 13 00:37:10.773833 systemd-networkd[1427]: lo: Gained carrier
Mar 13 00:37:10.776001 systemd-networkd[1427]: Enumeration completed
Mar 13 00:37:10.776122 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 13 00:37:10.778461 systemd-networkd[1427]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:37:10.779007 systemd-networkd[1427]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 13 00:37:10.780089 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 13 00:37:10.781193 systemd-networkd[1427]: eth0: Link UP
Mar 13 00:37:10.782185 systemd-networkd[1427]: eth0: Gained carrier
Mar 13 00:37:10.782514 systemd-networkd[1427]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:37:10.782719 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 13 00:37:10.788279 systemd-resolved[1431]: Positive Trust Anchors:
Mar 13 00:37:10.788292 systemd-resolved[1431]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 13 00:37:10.788319 systemd-resolved[1431]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 13 00:37:10.792531 systemd-resolved[1431]: Defaulting to hostname 'linux'.
Mar 13 00:37:10.794088 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 13 00:37:10.794988 systemd[1]: Reached target network.target - Network.
Mar 13 00:37:10.795746 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 13 00:37:10.796570 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 13 00:37:10.797509 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 13 00:37:10.798350 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 13 00:37:10.799295 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Mar 13 00:37:10.801150 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 13 00:37:10.802092 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 13 00:37:10.803024 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 13 00:37:10.804050 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 13 00:37:10.804194 systemd[1]: Reached target paths.target - Path Units.
Mar 13 00:37:10.805122 systemd[1]: Reached target timers.target - Timer Units.
Mar 13 00:37:10.807045 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 13 00:37:10.810121 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 13 00:37:10.812643 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 13 00:37:10.813586 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 13 00:37:10.814372 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 13 00:37:10.817610 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 13 00:37:10.818637 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 13 00:37:10.820315 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 13 00:37:10.821355 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 13 00:37:10.845746 systemd[1]: Reached target sockets.target - Socket Units.
Mar 13 00:37:10.846465 systemd[1]: Reached target basic.target - Basic System.
Mar 13 00:37:10.847232 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 13 00:37:10.847271 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 13 00:37:10.848607 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 13 00:37:10.851998 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 13 00:37:10.860979 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 13 00:37:10.865329 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 13 00:37:10.871157 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 13 00:37:10.882156 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 13 00:37:10.883755 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 13 00:37:10.885853 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Mar 13 00:37:10.889077 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 13 00:37:10.894044 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 13 00:37:10.902127 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 13 00:37:10.907638 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 13 00:37:10.907975 jq[1515]: false
Mar 13 00:37:10.918561 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 13 00:37:10.920578 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 13 00:37:10.922111 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 13 00:37:10.922670 systemd[1]: Starting update-engine.service - Update Engine...
Mar 13 00:37:10.935434 extend-filesystems[1516]: Found /dev/sda6
Mar 13 00:37:10.941757 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 13 00:37:10.944920 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Refreshing passwd entry cache
Mar 13 00:37:10.943776 oslogin_cache_refresh[1519]: Refreshing passwd entry cache
Mar 13 00:37:10.947908 extend-filesystems[1516]: Found /dev/sda9
Mar 13 00:37:10.949246 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Failure getting users, quitting
Mar 13 00:37:10.949287 oslogin_cache_refresh[1519]: Failure getting users, quitting
Mar 13 00:37:10.949346 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 13 00:37:10.949373 oslogin_cache_refresh[1519]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 13 00:37:10.949460 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Refreshing group entry cache
Mar 13 00:37:10.949490 oslogin_cache_refresh[1519]: Refreshing group entry cache
Mar 13 00:37:10.950030 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Failure getting groups, quitting
Mar 13 00:37:10.951914 oslogin_cache_refresh[1519]: Failure getting groups, quitting
Mar 13 00:37:10.953991 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 13 00:37:10.954037 extend-filesystems[1516]: Checking size of /dev/sda9
Mar 13 00:37:10.951931 oslogin_cache_refresh[1519]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 13 00:37:10.957600 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 13 00:37:10.958683 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 13 00:37:10.958952 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 13 00:37:10.959274 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Mar 13 00:37:10.960103 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Mar 13 00:37:10.962563 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 13 00:37:10.962784 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 13 00:37:10.982791 update_engine[1527]: I20260313 00:37:10.982722 1527 main.cc:92] Flatcar Update Engine starting
Mar 13 00:37:10.990115 coreos-metadata[1512]: Mar 13 00:37:10.988 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Mar 13 00:37:10.990325 jq[1535]: true
Mar 13 00:37:10.999282 (ntainerd)[1547]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 13 00:37:11.019316 systemd[1]: motdgen.service: Deactivated successfully.
Mar 13 00:37:11.019602 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 13 00:37:11.024915 extend-filesystems[1516]: Resized partition /dev/sda9
Mar 13 00:37:11.035933 extend-filesystems[1561]: resize2fs 1.47.3 (8-Jul-2025)
Mar 13 00:37:11.038016 tar[1541]: linux-amd64/LICENSE
Mar 13 00:37:11.038016 tar[1541]: linux-amd64/helm
Mar 13 00:37:11.043681 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks
Mar 13 00:37:11.049933 jq[1556]: true
Mar 13 00:37:11.063129 dbus-daemon[1513]: [system] SELinux support is enabled
Mar 13 00:37:11.063594 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 13 00:37:11.069585 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 13 00:37:11.069966 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 13 00:37:11.071965 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 13 00:37:11.071985 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 13 00:37:11.094188 systemd[1]: Started update-engine.service - Update Engine.
Mar 13 00:37:11.095938 update_engine[1527]: I20260313 00:37:11.095518 1527 update_check_scheduler.cc:74] Next update check in 3m2s
Mar 13 00:37:11.104645 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 13 00:37:11.217086 systemd-logind[1525]: Watching system buttons on /dev/input/event2 (Power Button)
Mar 13 00:37:11.217122 systemd-logind[1525]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 13 00:37:11.228767 systemd-logind[1525]: New seat seat0.
Mar 13 00:37:11.233951 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 13 00:37:11.244290 containerd[1547]: time="2026-03-13T00:37:11Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 13 00:37:11.247834 containerd[1547]: time="2026-03-13T00:37:11.247515010Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Mar 13 00:37:11.258085 bash[1587]: Updated "/home/core/.ssh/authorized_keys"
Mar 13 00:37:11.262136 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 13 00:37:11.269526 systemd[1]: Starting sshkeys.service...
Mar 13 00:37:11.272935 containerd[1547]: time="2026-03-13T00:37:11.272563070Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.46µs"
Mar 13 00:37:11.272935 containerd[1547]: time="2026-03-13T00:37:11.272590680Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 13 00:37:11.272935 containerd[1547]: time="2026-03-13T00:37:11.272610990Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 13 00:37:11.273006 containerd[1547]: time="2026-03-13T00:37:11.272987280Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 13 00:37:11.273025 containerd[1547]: time="2026-03-13T00:37:11.273004210Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 13 00:37:11.273109 containerd[1547]: time="2026-03-13T00:37:11.273030380Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 13 00:37:11.273954 containerd[1547]: time="2026-03-13T00:37:11.273126150Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 13 00:37:11.273954 containerd[1547]: time="2026-03-13T00:37:11.273156580Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 13 00:37:11.273954 containerd[1547]: time="2026-03-13T00:37:11.273512290Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 13 00:37:11.273954 containerd[1547]: time="2026-03-13T00:37:11.273532960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 13 00:37:11.273954 containerd[1547]: time="2026-03-13T00:37:11.273552350Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 13 00:37:11.273954 containerd[1547]: time="2026-03-13T00:37:11.273568470Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 13 00:37:11.273954 containerd[1547]: time="2026-03-13T00:37:11.273694640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 13 00:37:11.279311 containerd[1547]: time="2026-03-13T00:37:11.278118070Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 13 00:37:11.279311 containerd[1547]: time="2026-03-13T00:37:11.278156070Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 13 00:37:11.279311 containerd[1547]: time="2026-03-13T00:37:11.278166790Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 13 00:37:11.279311 containerd[1547]: time="2026-03-13T00:37:11.278202440Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 13 00:37:11.279311 containerd[1547]: time="2026-03-13T00:37:11.278470640Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 13 00:37:11.279311 containerd[1547]: time="2026-03-13T00:37:11.278552840Z" level=info msg="metadata content store policy set" policy=shared
Mar 13 00:37:11.305799 containerd[1547]: time="2026-03-13T00:37:11.304879740Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 13 00:37:11.305799 containerd[1547]: time="2026-03-13T00:37:11.304959860Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 13 00:37:11.305799 containerd[1547]: time="2026-03-13T00:37:11.304973930Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 13 00:37:11.305799 containerd[1547]: time="2026-03-13T00:37:11.305054600Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 13 00:37:11.305799 containerd[1547]: time="2026-03-13T00:37:11.305069440Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 13 00:37:11.305799 containerd[1547]: time="2026-03-13T00:37:11.305079090Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 13 00:37:11.305799 containerd[1547]: time="2026-03-13T00:37:11.305112140Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 13 00:37:11.305799 containerd[1547]: time="2026-03-13T00:37:11.305122900Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 13 00:37:11.305799 containerd[1547]: time="2026-03-13T00:37:11.305132390Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 13 00:37:11.305799 containerd[1547]: time="2026-03-13T00:37:11.305141260Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 13 00:37:11.305799 containerd[1547]: time="2026-03-13T00:37:11.305148960Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 13 00:37:11.305799 containerd[1547]: time="2026-03-13T00:37:11.305159540Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 13 00:37:11.305799 containerd[1547]: time="2026-03-13T00:37:11.305320240Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 13 00:37:11.305799 containerd[1547]: time="2026-03-13T00:37:11.305359010Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 13 00:37:11.306103 containerd[1547]: time="2026-03-13T00:37:11.305380850Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 13 00:37:11.306103 containerd[1547]: time="2026-03-13T00:37:11.305392600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 13 00:37:11.306103 containerd[1547]: time="2026-03-13T00:37:11.305402200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 13 00:37:11.306103 containerd[1547]: time="2026-03-13T00:37:11.305431120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 13 00:37:11.306103 containerd[1547]: time="2026-03-13T00:37:11.305441600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 13 00:37:11.306103 containerd[1547]: time="2026-03-13T00:37:11.305450830Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 13 00:37:11.306103 containerd[1547]: time="2026-03-13T00:37:11.305461600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 13 00:37:11.306103 containerd[1547]: time="2026-03-13T00:37:11.305471000Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 13 00:37:11.306103 containerd[1547]: time="2026-03-13T00:37:11.305485530Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 13 00:37:11.306103 containerd[1547]: time="2026-03-13T00:37:11.305545140Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 13 00:37:11.306103 containerd[1547]: time="2026-03-13T00:37:11.305556650Z" level=info msg="Start snapshots syncer"
Mar 13 00:37:11.306103 containerd[1547]: time="2026-03-13T00:37:11.305604580Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 13 00:37:11.306666 containerd[1547]: time="2026-03-13T00:37:11.306531580Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 13 00:37:11.306666 containerd[1547]: time="2026-03-13T00:37:11.306603950Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 13 00:37:11.307731 containerd[1547]: time="2026-03-13T00:37:11.307517260Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Mar 13 00:37:11.312123 containerd[1547]: time="2026-03-13T00:37:11.308366320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Mar 13 00:37:11.312123 containerd[1547]: time="2026-03-13T00:37:11.310752960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Mar 13 00:37:11.312123 containerd[1547]: time="2026-03-13T00:37:11.310767980Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Mar 13 00:37:11.312123 containerd[1547]: time="2026-03-13T00:37:11.310778060Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Mar 13 00:37:11.312123 containerd[1547]: time="2026-03-13T00:37:11.310809520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Mar 13 00:37:11.312123 containerd[1547]: time="2026-03-13T00:37:11.310821050Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Mar 13 00:37:11.312123 containerd[1547]: time="2026-03-13T00:37:11.310830990Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Mar 13 00:37:11.312123 containerd[1547]: time="2026-03-13T00:37:11.311177230Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Mar 13 00:37:11.312123 containerd[1547]: time="2026-03-13T00:37:11.311193720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Mar 13 00:37:11.312123 containerd[1547]: time="2026-03-13T00:37:11.311204780Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Mar 13 00:37:11.313670 containerd[1547]: time="2026-03-13T00:37:11.313098330Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 13 00:37:11.313670 containerd[1547]: time="2026-03-13T00:37:11.313129540Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 13 00:37:11.313670 containerd[1547]: time="2026-03-13T00:37:11.313143480Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 13 00:37:11.313670 containerd[1547]: time="2026-03-13T00:37:11.313357130Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 13 00:37:11.313670 containerd[1547]: time="2026-03-13T00:37:11.313371270Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Mar 13 00:37:11.313670 containerd[1547]: time="2026-03-13T00:37:11.313385720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Mar 13 00:37:11.313670 containerd[1547]: time="2026-03-13T00:37:11.313438020Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Mar 13 00:37:11.313670 containerd[1547]: time="2026-03-13T00:37:11.313586280Z" level=info msg="runtime interface created"
Mar 13 00:37:11.313670 containerd[1547]: time="2026-03-13T00:37:11.313598410Z" level=info msg="created NRI interface"
Mar 13 00:37:11.313670 containerd[1547]: time="2026-03-13T00:37:11.313612650Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Mar 13 00:37:11.314650 containerd[1547]: time="2026-03-13T00:37:11.314346720Z" level=info msg="Connect containerd service"
Mar 13 00:37:11.314650 containerd[1547]: time="2026-03-13T00:37:11.314383350Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 13 00:37:11.316138 systemd[1]:
Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 13 00:37:11.320133 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 13 00:37:11.321708 containerd[1547]: time="2026-03-13T00:37:11.321367130Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 13 00:37:11.363365 locksmithd[1567]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 13 00:37:11.426916 coreos-metadata[1597]: Mar 13 00:37:11.426 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Mar 13 00:37:11.429921 containerd[1547]: time="2026-03-13T00:37:11.429856620Z" level=info msg="Start subscribing containerd event" Mar 13 00:37:11.430071 containerd[1547]: time="2026-03-13T00:37:11.430042200Z" level=info msg="Start recovering state" Mar 13 00:37:11.430912 containerd[1547]: time="2026-03-13T00:37:11.430226990Z" level=info msg="Start event monitor" Mar 13 00:37:11.430912 containerd[1547]: time="2026-03-13T00:37:11.430244910Z" level=info msg="Start cni network conf syncer for default" Mar 13 00:37:11.430912 containerd[1547]: time="2026-03-13T00:37:11.430267670Z" level=info msg="Start streaming server" Mar 13 00:37:11.430912 containerd[1547]: time="2026-03-13T00:37:11.430285220Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 13 00:37:11.430912 containerd[1547]: time="2026-03-13T00:37:11.430295330Z" level=info msg="runtime interface starting up..." Mar 13 00:37:11.430912 containerd[1547]: time="2026-03-13T00:37:11.430305250Z" level=info msg="starting plugins..." 
Mar 13 00:37:11.430912 containerd[1547]: time="2026-03-13T00:37:11.430322710Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 13 00:37:11.430912 containerd[1547]: time="2026-03-13T00:37:11.430719730Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 13 00:37:11.430912 containerd[1547]: time="2026-03-13T00:37:11.430797310Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 13 00:37:11.430979 systemd[1]: Started containerd.service - containerd container runtime. Mar 13 00:37:11.432059 containerd[1547]: time="2026-03-13T00:37:11.432037300Z" level=info msg="containerd successfully booted in 0.188830s" Mar 13 00:37:11.465910 kernel: EXT4-fs (sda9): resized filesystem to 20360187 Mar 13 00:37:11.479160 extend-filesystems[1561]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Mar 13 00:37:11.479160 extend-filesystems[1561]: old_desc_blocks = 1, new_desc_blocks = 10 Mar 13 00:37:11.479160 extend-filesystems[1561]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. Mar 13 00:37:11.484866 extend-filesystems[1516]: Resized filesystem in /dev/sda9 Mar 13 00:37:11.481081 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 13 00:37:11.481939 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 13 00:37:11.536032 dbus-daemon[1513]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1427 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 13 00:37:11.536966 systemd-networkd[1427]: eth0: DHCPv4 address 172.234.197.95/24, gateway 172.234.197.1 acquired from 23.205.167.222 Mar 13 00:37:11.539320 systemd-timesyncd[1465]: Network configuration changed, trying to establish connection. Mar 13 00:37:11.543490 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Mar 13 00:37:11.641825 sshd_keygen[1532]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 13 00:37:11.683311 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 13 00:37:11.690856 dbus-daemon[1513]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 13 00:37:11.686520 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 13 00:37:11.687435 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Mar 13 00:37:11.692971 dbus-daemon[1513]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1615 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 13 00:37:11.699193 systemd[1]: Starting polkit.service - Authorization Manager... Mar 13 00:37:11.711960 tar[1541]: linux-amd64/README.md Mar 13 00:37:11.723791 systemd[1]: issuegen.service: Deactivated successfully. Mar 13 00:37:11.726071 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 13 00:37:11.731032 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 13 00:37:11.736478 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 13 00:37:11.752511 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 13 00:37:11.756251 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 13 00:37:11.762237 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 13 00:37:11.764168 systemd[1]: Reached target getty.target - Login Prompts. 
Mar 13 00:37:11.786080 polkitd[1625]: Started polkitd version 126 Mar 13 00:37:11.789553 polkitd[1625]: Loading rules from directory /etc/polkit-1/rules.d Mar 13 00:37:11.790083 polkitd[1625]: Loading rules from directory /run/polkit-1/rules.d Mar 13 00:37:11.790192 polkitd[1625]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Mar 13 00:37:11.790425 polkitd[1625]: Loading rules from directory /usr/local/share/polkit-1/rules.d Mar 13 00:37:11.790486 polkitd[1625]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Mar 13 00:37:11.790751 polkitd[1625]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 13 00:37:11.791267 polkitd[1625]: Finished loading, compiling and executing 2 rules Mar 13 00:37:11.791519 systemd[1]: Started polkit.service - Authorization Manager. Mar 13 00:37:11.792691 dbus-daemon[1513]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 13 00:37:11.793258 polkitd[1625]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 13 00:37:11.801608 systemd-hostnamed[1615]: Hostname set to <172-234-197-95> (transient) Mar 13 00:37:11.802075 systemd-resolved[1431]: System hostname changed to '172-234-197-95'. Mar 13 00:37:12.003651 coreos-metadata[1512]: Mar 13 00:37:12.003 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Mar 13 00:37:12.074110 systemd-networkd[1427]: eth0: Gained IPv6LL Mar 13 00:37:12.075003 systemd-timesyncd[1465]: Network configuration changed, trying to establish connection. Mar 13 00:37:12.077883 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 13 00:37:12.079410 systemd[1]: Reached target network-online.target - Network is Online. Mar 13 00:37:12.083116 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 13 00:37:12.087212 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 13 00:37:12.100414 coreos-metadata[1512]: Mar 13 00:37:12.100 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Mar 13 00:37:12.117607 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 13 00:37:12.287948 coreos-metadata[1512]: Mar 13 00:37:12.287 INFO Fetch successful Mar 13 00:37:12.287948 coreos-metadata[1512]: Mar 13 00:37:12.287 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Mar 13 00:37:12.445281 coreos-metadata[1597]: Mar 13 00:37:12.444 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Mar 13 00:37:12.534924 coreos-metadata[1597]: Mar 13 00:37:12.534 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Mar 13 00:37:12.546770 coreos-metadata[1512]: Mar 13 00:37:12.546 INFO Fetch successful Mar 13 00:37:12.673281 coreos-metadata[1597]: Mar 13 00:37:12.673 INFO Fetch successful Mar 13 00:37:12.688353 systemd-timesyncd[1465]: Network configuration changed, trying to establish connection. Mar 13 00:37:12.689060 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 13 00:37:12.693587 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 13 00:37:12.696114 update-ssh-keys[1679]: Updated "/home/core/.ssh/authorized_keys" Mar 13 00:37:12.697483 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 13 00:37:12.700663 systemd[1]: Finished sshkeys.service. Mar 13 00:37:13.030241 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:37:13.033370 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 13 00:37:13.039040 systemd[1]: Startup finished in 3.126s (kernel) + 9.079s (initrd) + 5.361s (userspace) = 17.567s. 
Mar 13 00:37:13.041417 (kubelet)[1688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:37:13.589497 kubelet[1688]: E0313 00:37:13.589440 1688 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:37:13.592529 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:37:13.592747 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:37:13.593212 systemd[1]: kubelet.service: Consumed 942ms CPU time, 265.6M memory peak. Mar 13 00:37:14.058677 systemd-timesyncd[1465]: Network configuration changed, trying to establish connection. Mar 13 00:37:14.636486 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 13 00:37:14.638247 systemd[1]: Started sshd@0-172.234.197.95:22-68.220.241.50:39114.service - OpenSSH per-connection server daemon (68.220.241.50:39114). Mar 13 00:37:14.807768 sshd[1699]: Accepted publickey for core from 68.220.241.50 port 39114 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:37:14.810163 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:37:14.819572 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 13 00:37:14.820771 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 13 00:37:14.828617 systemd-logind[1525]: New session 1 of user core. Mar 13 00:37:14.840647 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 13 00:37:14.844476 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Mar 13 00:37:14.861558 (systemd)[1704]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 13 00:37:14.864238 systemd-logind[1525]: New session c1 of user core. Mar 13 00:37:15.000714 systemd[1704]: Queued start job for default target default.target. Mar 13 00:37:15.007108 systemd[1704]: Created slice app.slice - User Application Slice. Mar 13 00:37:15.007136 systemd[1704]: Reached target paths.target - Paths. Mar 13 00:37:15.007180 systemd[1704]: Reached target timers.target - Timers. Mar 13 00:37:15.008699 systemd[1704]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 13 00:37:15.021028 systemd[1704]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 13 00:37:15.021235 systemd[1704]: Reached target sockets.target - Sockets. Mar 13 00:37:15.021343 systemd[1704]: Reached target basic.target - Basic System. Mar 13 00:37:15.021470 systemd[1704]: Reached target default.target - Main User Target. Mar 13 00:37:15.021495 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 13 00:37:15.021604 systemd[1704]: Startup finished in 151ms. Mar 13 00:37:15.032014 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 13 00:37:15.115288 systemd[1]: Started sshd@1-172.234.197.95:22-68.220.241.50:39128.service - OpenSSH per-connection server daemon (68.220.241.50:39128). Mar 13 00:37:15.283764 sshd[1715]: Accepted publickey for core from 68.220.241.50 port 39128 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:37:15.284542 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:37:15.290179 systemd-logind[1525]: New session 2 of user core. Mar 13 00:37:15.299029 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 13 00:37:15.357472 sshd[1718]: Connection closed by 68.220.241.50 port 39128 Mar 13 00:37:15.359215 sshd-session[1715]: pam_unix(sshd:session): session closed for user core Mar 13 00:37:15.363437 systemd[1]: sshd@1-172.234.197.95:22-68.220.241.50:39128.service: Deactivated successfully. Mar 13 00:37:15.365420 systemd[1]: session-2.scope: Deactivated successfully. Mar 13 00:37:15.366799 systemd-logind[1525]: Session 2 logged out. Waiting for processes to exit. Mar 13 00:37:15.368320 systemd-logind[1525]: Removed session 2. Mar 13 00:37:15.389225 systemd[1]: Started sshd@2-172.234.197.95:22-68.220.241.50:39140.service - OpenSSH per-connection server daemon (68.220.241.50:39140). Mar 13 00:37:15.538074 sshd[1724]: Accepted publickey for core from 68.220.241.50 port 39140 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:37:15.540291 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:37:15.547119 systemd-logind[1525]: New session 3 of user core. Mar 13 00:37:15.554086 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 13 00:37:15.606953 sshd[1727]: Connection closed by 68.220.241.50 port 39140 Mar 13 00:37:15.609092 sshd-session[1724]: pam_unix(sshd:session): session closed for user core Mar 13 00:37:15.615525 systemd[1]: sshd@2-172.234.197.95:22-68.220.241.50:39140.service: Deactivated successfully. Mar 13 00:37:15.618403 systemd[1]: session-3.scope: Deactivated successfully. Mar 13 00:37:15.619350 systemd-logind[1525]: Session 3 logged out. Waiting for processes to exit. Mar 13 00:37:15.621652 systemd-logind[1525]: Removed session 3. Mar 13 00:37:15.641382 systemd[1]: Started sshd@3-172.234.197.95:22-68.220.241.50:39154.service - OpenSSH per-connection server daemon (68.220.241.50:39154). 
Mar 13 00:37:15.794703 sshd[1733]: Accepted publickey for core from 68.220.241.50 port 39154 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:37:15.796560 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:37:15.803494 systemd-logind[1525]: New session 4 of user core. Mar 13 00:37:15.808041 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 13 00:37:15.865333 sshd[1736]: Connection closed by 68.220.241.50 port 39154 Mar 13 00:37:15.867074 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Mar 13 00:37:15.870506 systemd[1]: sshd@3-172.234.197.95:22-68.220.241.50:39154.service: Deactivated successfully. Mar 13 00:37:15.872941 systemd[1]: session-4.scope: Deactivated successfully. Mar 13 00:37:15.875327 systemd-logind[1525]: Session 4 logged out. Waiting for processes to exit. Mar 13 00:37:15.876622 systemd-logind[1525]: Removed session 4. Mar 13 00:37:15.895864 systemd[1]: Started sshd@4-172.234.197.95:22-68.220.241.50:39162.service - OpenSSH per-connection server daemon (68.220.241.50:39162). Mar 13 00:37:16.046016 sshd[1742]: Accepted publickey for core from 68.220.241.50 port 39162 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:37:16.048308 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:37:16.056231 systemd-logind[1525]: New session 5 of user core. Mar 13 00:37:16.067131 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 13 00:37:16.112826 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 13 00:37:16.113188 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:37:16.133539 sudo[1746]: pam_unix(sudo:session): session closed for user root Mar 13 00:37:16.154599 sshd[1745]: Connection closed by 68.220.241.50 port 39162 Mar 13 00:37:16.156440 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Mar 13 00:37:16.163344 systemd[1]: sshd@4-172.234.197.95:22-68.220.241.50:39162.service: Deactivated successfully. Mar 13 00:37:16.166633 systemd[1]: session-5.scope: Deactivated successfully. Mar 13 00:37:16.168256 systemd-logind[1525]: Session 5 logged out. Waiting for processes to exit. Mar 13 00:37:16.171846 systemd-logind[1525]: Removed session 5. Mar 13 00:37:16.191248 systemd[1]: Started sshd@5-172.234.197.95:22-68.220.241.50:39172.service - OpenSSH per-connection server daemon (68.220.241.50:39172). Mar 13 00:37:16.354106 sshd[1752]: Accepted publickey for core from 68.220.241.50 port 39172 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:37:16.356050 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:37:16.362388 systemd-logind[1525]: New session 6 of user core. Mar 13 00:37:16.365056 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 13 00:37:16.406026 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 13 00:37:16.406390 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:37:16.412039 sudo[1757]: pam_unix(sudo:session): session closed for user root Mar 13 00:37:16.420266 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 13 00:37:16.420613 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:37:16.433656 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 13 00:37:16.482082 augenrules[1779]: No rules Mar 13 00:37:16.484277 systemd[1]: audit-rules.service: Deactivated successfully. Mar 13 00:37:16.484827 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 13 00:37:16.487266 sudo[1756]: pam_unix(sudo:session): session closed for user root Mar 13 00:37:16.510919 sshd[1755]: Connection closed by 68.220.241.50 port 39172 Mar 13 00:37:16.511485 sshd-session[1752]: pam_unix(sshd:session): session closed for user core Mar 13 00:37:16.515395 systemd[1]: sshd@5-172.234.197.95:22-68.220.241.50:39172.service: Deactivated successfully. Mar 13 00:37:16.517972 systemd[1]: session-6.scope: Deactivated successfully. Mar 13 00:37:16.520615 systemd-logind[1525]: Session 6 logged out. Waiting for processes to exit. Mar 13 00:37:16.522071 systemd-logind[1525]: Removed session 6. Mar 13 00:37:16.553527 systemd[1]: Started sshd@6-172.234.197.95:22-68.220.241.50:39184.service - OpenSSH per-connection server daemon (68.220.241.50:39184). 
Mar 13 00:37:16.743955 sshd[1788]: Accepted publickey for core from 68.220.241.50 port 39184 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:37:16.746436 sshd-session[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:37:16.752441 systemd-logind[1525]: New session 7 of user core. Mar 13 00:37:16.764199 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 13 00:37:16.804876 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 13 00:37:16.805244 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:37:17.097461 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 13 00:37:17.108251 (dockerd)[1809]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 13 00:37:17.337943 dockerd[1809]: time="2026-03-13T00:37:17.337161010Z" level=info msg="Starting up" Mar 13 00:37:17.339177 dockerd[1809]: time="2026-03-13T00:37:17.339144250Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 13 00:37:17.351280 dockerd[1809]: time="2026-03-13T00:37:17.351208460Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 13 00:37:17.372482 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2374793469-merged.mount: Deactivated successfully. Mar 13 00:37:17.430210 dockerd[1809]: time="2026-03-13T00:37:17.430151490Z" level=info msg="Loading containers: start." Mar 13 00:37:17.443918 kernel: Initializing XFRM netlink socket Mar 13 00:37:17.686623 systemd-timesyncd[1465]: Network configuration changed, trying to establish connection. Mar 13 00:37:17.702560 systemd-timesyncd[1465]: Network configuration changed, trying to establish connection. 
Mar 13 00:37:17.741557 systemd-networkd[1427]: docker0: Link UP Mar 13 00:37:17.742355 systemd-timesyncd[1465]: Network configuration changed, trying to establish connection. Mar 13 00:37:17.745261 dockerd[1809]: time="2026-03-13T00:37:17.745219890Z" level=info msg="Loading containers: done." Mar 13 00:37:17.759675 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4237349961-merged.mount: Deactivated successfully. Mar 13 00:37:17.760354 dockerd[1809]: time="2026-03-13T00:37:17.760324080Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 13 00:37:17.760408 dockerd[1809]: time="2026-03-13T00:37:17.760391130Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 13 00:37:17.760489 dockerd[1809]: time="2026-03-13T00:37:17.760470600Z" level=info msg="Initializing buildkit" Mar 13 00:37:17.782282 dockerd[1809]: time="2026-03-13T00:37:17.782261340Z" level=info msg="Completed buildkit initialization" Mar 13 00:37:17.789318 dockerd[1809]: time="2026-03-13T00:37:17.789300220Z" level=info msg="Daemon has completed initialization" Mar 13 00:37:17.789456 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 13 00:37:17.790926 dockerd[1809]: time="2026-03-13T00:37:17.790012440Z" level=info msg="API listen on /run/docker.sock" Mar 13 00:37:18.261765 containerd[1547]: time="2026-03-13T00:37:18.261728270Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 13 00:37:18.868942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1697511532.mount: Deactivated successfully. 
Mar 13 00:37:20.130963 containerd[1547]: time="2026-03-13T00:37:20.130847560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:37:20.131858 containerd[1547]: time="2026-03-13T00:37:20.131807170Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116192" Mar 13 00:37:20.133567 containerd[1547]: time="2026-03-13T00:37:20.133126690Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:37:20.135763 containerd[1547]: time="2026-03-13T00:37:20.135539370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:37:20.136625 containerd[1547]: time="2026-03-13T00:37:20.136585700Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 1.87481998s" Mar 13 00:37:20.136670 containerd[1547]: time="2026-03-13T00:37:20.136631030Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\"" Mar 13 00:37:20.138040 containerd[1547]: time="2026-03-13T00:37:20.138003770Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 13 00:37:21.617926 containerd[1547]: time="2026-03-13T00:37:21.616337350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:37:21.620115 containerd[1547]: time="2026-03-13T00:37:21.620085640Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021816" Mar 13 00:37:21.620166 containerd[1547]: time="2026-03-13T00:37:21.620150670Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:37:21.622473 containerd[1547]: time="2026-03-13T00:37:21.622445830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:37:21.623576 containerd[1547]: time="2026-03-13T00:37:21.623553510Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 1.48537809s" Mar 13 00:37:21.623669 containerd[1547]: time="2026-03-13T00:37:21.623652450Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\"" Mar 13 00:37:21.624108 containerd[1547]: time="2026-03-13T00:37:21.624067470Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 13 00:37:22.803080 containerd[1547]: time="2026-03-13T00:37:22.803018320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:37:22.804166 containerd[1547]: time="2026-03-13T00:37:22.804001040Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162752" Mar 13 00:37:22.804845 containerd[1547]: time="2026-03-13T00:37:22.804821540Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:37:22.807150 containerd[1547]: time="2026-03-13T00:37:22.807130610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:37:22.807982 containerd[1547]: time="2026-03-13T00:37:22.807945870Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 1.1838531s" Mar 13 00:37:22.807982 containerd[1547]: time="2026-03-13T00:37:22.807975190Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\"" Mar 13 00:37:22.808385 containerd[1547]: time="2026-03-13T00:37:22.808358300Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 13 00:37:23.776653 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 13 00:37:23.780527 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:37:23.852694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1601663477.mount: Deactivated successfully. Mar 13 00:37:24.001864 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 13 00:37:24.011421 (kubelet)[2101]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:37:24.057216 kubelet[2101]: E0313 00:37:24.057111 2101 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:37:24.063961 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:37:24.064193 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:37:24.065142 systemd[1]: kubelet.service: Consumed 214ms CPU time, 110.2M memory peak. Mar 13 00:37:24.332129 containerd[1547]: time="2026-03-13T00:37:24.331840750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:37:24.332954 containerd[1547]: time="2026-03-13T00:37:24.332748590Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828653" Mar 13 00:37:24.333478 containerd[1547]: time="2026-03-13T00:37:24.333451380Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:37:24.335089 containerd[1547]: time="2026-03-13T00:37:24.335064580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:37:24.335837 containerd[1547]: time="2026-03-13T00:37:24.335813460Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id 
\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 1.52742915s" Mar 13 00:37:24.335926 containerd[1547]: time="2026-03-13T00:37:24.335910080Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\"" Mar 13 00:37:24.336544 containerd[1547]: time="2026-03-13T00:37:24.336505530Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 13 00:37:24.843706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3013775032.mount: Deactivated successfully. Mar 13 00:37:25.673927 containerd[1547]: time="2026-03-13T00:37:25.673846310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:37:25.675436 containerd[1547]: time="2026-03-13T00:37:25.674717330Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942244" Mar 13 00:37:25.675436 containerd[1547]: time="2026-03-13T00:37:25.675404790Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:37:25.678619 containerd[1547]: time="2026-03-13T00:37:25.678594790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:37:25.679591 containerd[1547]: time="2026-03-13T00:37:25.679566240Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.34300674s" Mar 13 00:37:25.679694 containerd[1547]: time="2026-03-13T00:37:25.679662830Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Mar 13 00:37:25.681101 containerd[1547]: time="2026-03-13T00:37:25.680688650Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 13 00:37:26.147475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3256744135.mount: Deactivated successfully. Mar 13 00:37:26.153342 containerd[1547]: time="2026-03-13T00:37:26.153287500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:37:26.156367 containerd[1547]: time="2026-03-13T00:37:26.154178740Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321144" Mar 13 00:37:26.156948 containerd[1547]: time="2026-03-13T00:37:26.154793490Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:37:26.158129 containerd[1547]: time="2026-03-13T00:37:26.158109100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:37:26.158921 containerd[1547]: time="2026-03-13T00:37:26.158708230Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 477.98366ms" Mar 13 00:37:26.158921 containerd[1547]: time="2026-03-13T00:37:26.158734350Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 13 00:37:26.159214 containerd[1547]: time="2026-03-13T00:37:26.159174810Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 13 00:37:26.806239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2766047835.mount: Deactivated successfully. Mar 13 00:37:27.614854 containerd[1547]: time="2026-03-13T00:37:27.614599450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:37:27.615710 containerd[1547]: time="2026-03-13T00:37:27.615688580Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718846" Mar 13 00:37:27.616358 containerd[1547]: time="2026-03-13T00:37:27.616331530Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:37:27.619329 containerd[1547]: time="2026-03-13T00:37:27.619307150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:37:27.620492 containerd[1547]: time="2026-03-13T00:37:27.620245640Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest 
\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.46104518s" Mar 13 00:37:27.620492 containerd[1547]: time="2026-03-13T00:37:27.620268800Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Mar 13 00:37:30.685289 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:37:30.685443 systemd[1]: kubelet.service: Consumed 214ms CPU time, 110.2M memory peak. Mar 13 00:37:30.687683 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:37:30.718009 systemd[1]: Reload requested from client PID 2255 ('systemctl') (unit session-7.scope)... Mar 13 00:37:30.718103 systemd[1]: Reloading... Mar 13 00:37:30.887058 zram_generator::config[2299]: No configuration found. Mar 13 00:37:31.114655 systemd[1]: Reloading finished in 396 ms. Mar 13 00:37:31.175213 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 13 00:37:31.175330 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 13 00:37:31.176006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:37:31.176067 systemd[1]: kubelet.service: Consumed 147ms CPU time, 98.3M memory peak. Mar 13 00:37:31.177646 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:37:31.344824 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:37:31.354263 (kubelet)[2354]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 13 00:37:31.389521 kubelet[2354]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 13 00:37:31.389521 kubelet[2354]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 13 00:37:31.389521 kubelet[2354]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 00:37:31.389521 kubelet[2354]: I0313 00:37:31.389278 2354 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 13 00:37:32.123052 kubelet[2354]: I0313 00:37:32.123003 2354 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 13 00:37:32.123052 kubelet[2354]: I0313 00:37:32.123031 2354 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 13 00:37:32.123673 kubelet[2354]: I0313 00:37:32.123632 2354 server.go:956] "Client rotation is on, will bootstrap in background" Mar 13 00:37:32.172935 kubelet[2354]: E0313 00:37:32.172744 2354 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.234.197.95:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.234.197.95:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 13 00:37:32.176338 kubelet[2354]: I0313 00:37:32.176309 2354 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 13 00:37:32.184925 kubelet[2354]: I0313 00:37:32.183978 2354 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 13 00:37:32.189261 kubelet[2354]: I0313 00:37:32.189240 2354 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 13 00:37:32.190311 kubelet[2354]: I0313 00:37:32.190230 2354 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 00:37:32.190524 kubelet[2354]: I0313 00:37:32.190290 2354 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-197-95","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 00:37:32.190524 kubelet[2354]: I0313 00:37:32.190517 2354 topology_manager.go:138] "Creating topology manager with none policy" Mar 13 
00:37:32.190680 kubelet[2354]: I0313 00:37:32.190531 2354 container_manager_linux.go:303] "Creating device plugin manager" Mar 13 00:37:32.190705 kubelet[2354]: I0313 00:37:32.190692 2354 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:37:32.197407 kubelet[2354]: I0313 00:37:32.197380 2354 kubelet.go:480] "Attempting to sync node with API server" Mar 13 00:37:32.197469 kubelet[2354]: I0313 00:37:32.197409 2354 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 00:37:32.197469 kubelet[2354]: I0313 00:37:32.197447 2354 kubelet.go:386] "Adding apiserver pod source" Mar 13 00:37:32.197522 kubelet[2354]: I0313 00:37:32.197488 2354 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 00:37:32.204356 kubelet[2354]: I0313 00:37:32.204312 2354 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 13 00:37:32.205074 kubelet[2354]: I0313 00:37:32.204972 2354 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 13 00:37:32.206682 kubelet[2354]: W0313 00:37:32.206643 2354 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 13 00:37:32.207739 kubelet[2354]: E0313 00:37:32.207242 2354 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.234.197.95:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-197-95&limit=500&resourceVersion=0\": dial tcp 172.234.197.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 13 00:37:32.207739 kubelet[2354]: E0313 00:37:32.207577 2354 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.234.197.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.234.197.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 13 00:37:32.213989 kubelet[2354]: I0313 00:37:32.213960 2354 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 13 00:37:32.214086 kubelet[2354]: I0313 00:37:32.214070 2354 server.go:1289] "Started kubelet" Mar 13 00:37:32.216869 kubelet[2354]: I0313 00:37:32.216827 2354 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 13 00:37:32.220910 kubelet[2354]: E0313 00:37:32.219166 2354 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.234.197.95:6443/api/v1/namespaces/default/events\": dial tcp 172.234.197.95:6443: connect: connection refused" event="&Event{ObjectMeta:{172-234-197-95.189c3f929d4d1772 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-234-197-95,UID:172-234-197-95,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-234-197-95,},FirstTimestamp:2026-03-13 00:37:32.21398309 +0000 UTC m=+0.855485521,LastTimestamp:2026-03-13 00:37:32.21398309 +0000 UTC m=+0.855485521,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-234-197-95,}" Mar 13 00:37:32.220910 kubelet[2354]: I0313 00:37:32.220695 2354 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 13 00:37:32.222069 kubelet[2354]: I0313 00:37:32.222027 2354 server.go:317] "Adding debug handlers to kubelet server" Mar 13 00:37:32.226076 kubelet[2354]: I0313 00:37:32.226061 2354 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 13 00:37:32.226531 kubelet[2354]: E0313 00:37:32.226513 2354 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-234-197-95\" not found" Mar 13 00:37:32.226729 kubelet[2354]: I0313 00:37:32.226692 2354 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 00:37:32.227019 kubelet[2354]: I0313 00:37:32.227001 2354 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 00:37:32.227221 kubelet[2354]: I0313 00:37:32.227169 2354 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 13 00:37:32.227710 kubelet[2354]: I0313 00:37:32.227602 2354 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 13 00:37:32.227866 kubelet[2354]: I0313 00:37:32.227686 2354 reconciler.go:26] "Reconciler: start to sync state" Mar 13 00:37:32.230208 kubelet[2354]: E0313 00:37:32.230186 2354 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.234.197.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.234.197.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 13 00:37:32.231517 kubelet[2354]: E0313 00:37:32.231486 2354 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://172.234.197.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-197-95?timeout=10s\": dial tcp 172.234.197.95:6443: connect: connection refused" interval="200ms" Mar 13 00:37:32.233297 kubelet[2354]: I0313 00:37:32.233036 2354 factory.go:223] Registration of the systemd container factory successfully Mar 13 00:37:32.233297 kubelet[2354]: I0313 00:37:32.233163 2354 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 13 00:37:32.234359 kubelet[2354]: E0313 00:37:32.234324 2354 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 13 00:37:32.236279 kubelet[2354]: I0313 00:37:32.236244 2354 factory.go:223] Registration of the containerd container factory successfully Mar 13 00:37:32.252459 kubelet[2354]: I0313 00:37:32.252399 2354 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 13 00:37:32.252459 kubelet[2354]: I0313 00:37:32.252420 2354 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 13 00:37:32.252459 kubelet[2354]: I0313 00:37:32.252453 2354 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:37:32.254993 kubelet[2354]: I0313 00:37:32.254927 2354 policy_none.go:49] "None policy: Start" Mar 13 00:37:32.254993 kubelet[2354]: I0313 00:37:32.254950 2354 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 13 00:37:32.254993 kubelet[2354]: I0313 00:37:32.254967 2354 state_mem.go:35] "Initializing new in-memory state store" Mar 13 00:37:32.257111 kubelet[2354]: I0313 00:37:32.257060 2354 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 13 00:37:32.260915 kubelet[2354]: I0313 00:37:32.260823 2354 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Mar 13 00:37:32.261018 kubelet[2354]: I0313 00:37:32.261004 2354 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 13 00:37:32.261101 kubelet[2354]: I0313 00:37:32.261087 2354 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 13 00:37:32.262218 kubelet[2354]: I0313 00:37:32.262193 2354 kubelet.go:2436] "Starting kubelet main sync loop" Mar 13 00:37:32.262524 kubelet[2354]: E0313 00:37:32.262490 2354 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 00:37:32.263124 kubelet[2354]: E0313 00:37:32.263037 2354 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.234.197.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.234.197.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 13 00:37:32.269155 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 13 00:37:32.282742 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 13 00:37:32.296772 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Mar 13 00:37:32.299178 kubelet[2354]: E0313 00:37:32.299143 2354 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 13 00:37:32.299400 kubelet[2354]: I0313 00:37:32.299375 2354 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 00:37:32.299457 kubelet[2354]: I0313 00:37:32.299396 2354 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 00:37:32.300206 kubelet[2354]: I0313 00:37:32.300068 2354 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 00:37:32.302123 kubelet[2354]: E0313 00:37:32.302101 2354 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 13 00:37:32.302214 kubelet[2354]: E0313 00:37:32.302146 2354 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-234-197-95\" not found" Mar 13 00:37:32.377620 systemd[1]: Created slice kubepods-burstable-pod88513775143ecfcb74942a91a02584b6.slice - libcontainer container kubepods-burstable-pod88513775143ecfcb74942a91a02584b6.slice. 
Mar 13 00:37:32.401258 kubelet[2354]: I0313 00:37:32.401212 2354 kubelet_node_status.go:75] "Attempting to register node" node="172-234-197-95" Mar 13 00:37:32.401717 kubelet[2354]: E0313 00:37:32.401660 2354 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.197.95:6443/api/v1/nodes\": dial tcp 172.234.197.95:6443: connect: connection refused" node="172-234-197-95" Mar 13 00:37:32.404312 kubelet[2354]: E0313 00:37:32.404242 2354 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-197-95\" not found" node="172-234-197-95" Mar 13 00:37:32.408167 systemd[1]: Created slice kubepods-burstable-pod9ee6a71c1bead963b481a911e43b3d80.slice - libcontainer container kubepods-burstable-pod9ee6a71c1bead963b481a911e43b3d80.slice. Mar 13 00:37:32.410485 kubelet[2354]: E0313 00:37:32.410432 2354 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-197-95\" not found" node="172-234-197-95" Mar 13 00:37:32.413923 systemd[1]: Created slice kubepods-burstable-pod92233e5a0565c6332f403ce5fb37737d.slice - libcontainer container kubepods-burstable-pod92233e5a0565c6332f403ce5fb37737d.slice. 
Mar 13 00:37:32.415696 kubelet[2354]: E0313 00:37:32.415675 2354 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-197-95\" not found" node="172-234-197-95" Mar 13 00:37:32.428822 kubelet[2354]: I0313 00:37:32.428783 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9ee6a71c1bead963b481a911e43b3d80-flexvolume-dir\") pod \"kube-controller-manager-172-234-197-95\" (UID: \"9ee6a71c1bead963b481a911e43b3d80\") " pod="kube-system/kube-controller-manager-172-234-197-95" Mar 13 00:37:32.428822 kubelet[2354]: I0313 00:37:32.428829 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ee6a71c1bead963b481a911e43b3d80-k8s-certs\") pod \"kube-controller-manager-172-234-197-95\" (UID: \"9ee6a71c1bead963b481a911e43b3d80\") " pod="kube-system/kube-controller-manager-172-234-197-95" Mar 13 00:37:32.428995 kubelet[2354]: I0313 00:37:32.428858 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/92233e5a0565c6332f403ce5fb37737d-kubeconfig\") pod \"kube-scheduler-172-234-197-95\" (UID: \"92233e5a0565c6332f403ce5fb37737d\") " pod="kube-system/kube-scheduler-172-234-197-95" Mar 13 00:37:32.428995 kubelet[2354]: I0313 00:37:32.428883 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/88513775143ecfcb74942a91a02584b6-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-197-95\" (UID: \"88513775143ecfcb74942a91a02584b6\") " pod="kube-system/kube-apiserver-172-234-197-95" Mar 13 00:37:32.428995 kubelet[2354]: I0313 00:37:32.428933 2354 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ee6a71c1bead963b481a911e43b3d80-ca-certs\") pod \"kube-controller-manager-172-234-197-95\" (UID: \"9ee6a71c1bead963b481a911e43b3d80\") " pod="kube-system/kube-controller-manager-172-234-197-95" Mar 13 00:37:32.428995 kubelet[2354]: I0313 00:37:32.428956 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9ee6a71c1bead963b481a911e43b3d80-kubeconfig\") pod \"kube-controller-manager-172-234-197-95\" (UID: \"9ee6a71c1bead963b481a911e43b3d80\") " pod="kube-system/kube-controller-manager-172-234-197-95" Mar 13 00:37:32.428995 kubelet[2354]: I0313 00:37:32.428979 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ee6a71c1bead963b481a911e43b3d80-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-197-95\" (UID: \"9ee6a71c1bead963b481a911e43b3d80\") " pod="kube-system/kube-controller-manager-172-234-197-95" Mar 13 00:37:32.429192 kubelet[2354]: I0313 00:37:32.429003 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/88513775143ecfcb74942a91a02584b6-ca-certs\") pod \"kube-apiserver-172-234-197-95\" (UID: \"88513775143ecfcb74942a91a02584b6\") " pod="kube-system/kube-apiserver-172-234-197-95" Mar 13 00:37:32.429192 kubelet[2354]: I0313 00:37:32.429025 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/88513775143ecfcb74942a91a02584b6-k8s-certs\") pod \"kube-apiserver-172-234-197-95\" (UID: \"88513775143ecfcb74942a91a02584b6\") " pod="kube-system/kube-apiserver-172-234-197-95" Mar 13 00:37:32.432336 kubelet[2354]: E0313 
00:37:32.432298 2354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.197.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-197-95?timeout=10s\": dial tcp 172.234.197.95:6443: connect: connection refused" interval="400ms" Mar 13 00:37:32.604040 kubelet[2354]: I0313 00:37:32.604005 2354 kubelet_node_status.go:75] "Attempting to register node" node="172-234-197-95" Mar 13 00:37:32.604441 kubelet[2354]: E0313 00:37:32.604418 2354 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.197.95:6443/api/v1/nodes\": dial tcp 172.234.197.95:6443: connect: connection refused" node="172-234-197-95" Mar 13 00:37:32.705595 kubelet[2354]: E0313 00:37:32.705492 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:37:32.706821 containerd[1547]: time="2026-03-13T00:37:32.706136060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-197-95,Uid:88513775143ecfcb74942a91a02584b6,Namespace:kube-system,Attempt:0,}" Mar 13 00:37:32.711350 kubelet[2354]: E0313 00:37:32.711332 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:37:32.711929 containerd[1547]: time="2026-03-13T00:37:32.711698150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-197-95,Uid:9ee6a71c1bead963b481a911e43b3d80,Namespace:kube-system,Attempt:0,}" Mar 13 00:37:32.718136 kubelet[2354]: E0313 00:37:32.718113 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:37:32.719199 containerd[1547]: 
time="2026-03-13T00:37:32.719163610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-197-95,Uid:92233e5a0565c6332f403ce5fb37737d,Namespace:kube-system,Attempt:0,}" Mar 13 00:37:32.741452 containerd[1547]: time="2026-03-13T00:37:32.741250890Z" level=info msg="connecting to shim 629a5b9c9d456ea7ea6a239eaf329465f34d960a2eaadbe3180cdb598cede476" address="unix:///run/containerd/s/0893be7b5a061ae3eee5f744b3071b8928182df15e1b2add0362a57845cea1db" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:37:32.756943 containerd[1547]: time="2026-03-13T00:37:32.756387470Z" level=info msg="connecting to shim b86930dba79dcb9dadbf6cd7b7332790c02a033a779e097b5b34def8aa3de3d1" address="unix:///run/containerd/s/3fcb4d22abbb6075e2facae6253a73e8370334ca456695d3b6ab335b582f492f" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:37:32.760022 containerd[1547]: time="2026-03-13T00:37:32.759990600Z" level=info msg="connecting to shim 3d1c3c2f67a8034152d47e61d69809e061cb302a7fb5d94102945e592da2f6b1" address="unix:///run/containerd/s/fba662c5f575c63eed82b0b277450cd28e9537447d25d6978619997d7296766d" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:37:32.789188 systemd[1]: Started cri-containerd-629a5b9c9d456ea7ea6a239eaf329465f34d960a2eaadbe3180cdb598cede476.scope - libcontainer container 629a5b9c9d456ea7ea6a239eaf329465f34d960a2eaadbe3180cdb598cede476. Mar 13 00:37:32.805005 systemd[1]: Started cri-containerd-b86930dba79dcb9dadbf6cd7b7332790c02a033a779e097b5b34def8aa3de3d1.scope - libcontainer container b86930dba79dcb9dadbf6cd7b7332790c02a033a779e097b5b34def8aa3de3d1. Mar 13 00:37:32.810129 systemd[1]: Started cri-containerd-3d1c3c2f67a8034152d47e61d69809e061cb302a7fb5d94102945e592da2f6b1.scope - libcontainer container 3d1c3c2f67a8034152d47e61d69809e061cb302a7fb5d94102945e592da2f6b1. 
Mar 13 00:37:32.833601 kubelet[2354]: E0313 00:37:32.833560 2354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.197.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-197-95?timeout=10s\": dial tcp 172.234.197.95:6443: connect: connection refused" interval="800ms" Mar 13 00:37:32.866253 containerd[1547]: time="2026-03-13T00:37:32.866193920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-197-95,Uid:88513775143ecfcb74942a91a02584b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"629a5b9c9d456ea7ea6a239eaf329465f34d960a2eaadbe3180cdb598cede476\"" Mar 13 00:37:32.868380 kubelet[2354]: E0313 00:37:32.868355 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:37:32.881181 containerd[1547]: time="2026-03-13T00:37:32.881154700Z" level=info msg="CreateContainer within sandbox \"629a5b9c9d456ea7ea6a239eaf329465f34d960a2eaadbe3180cdb598cede476\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 13 00:37:32.896523 containerd[1547]: time="2026-03-13T00:37:32.896361250Z" level=info msg="Container 4d94e9900627bb99f6b0942e20d3e140d8a37ec041829fab6b8b3d10770e321e: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:37:32.897330 containerd[1547]: time="2026-03-13T00:37:32.897311310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-197-95,Uid:9ee6a71c1bead963b481a911e43b3d80,Namespace:kube-system,Attempt:0,} returns sandbox id \"b86930dba79dcb9dadbf6cd7b7332790c02a033a779e097b5b34def8aa3de3d1\"" Mar 13 00:37:32.898883 kubelet[2354]: E0313 00:37:32.898857 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 
00:37:32.908230 containerd[1547]: time="2026-03-13T00:37:32.908197170Z" level=info msg="CreateContainer within sandbox \"b86930dba79dcb9dadbf6cd7b7332790c02a033a779e097b5b34def8aa3de3d1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 13 00:37:32.914236 containerd[1547]: time="2026-03-13T00:37:32.914165820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-197-95,Uid:92233e5a0565c6332f403ce5fb37737d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d1c3c2f67a8034152d47e61d69809e061cb302a7fb5d94102945e592da2f6b1\"" Mar 13 00:37:32.915397 kubelet[2354]: E0313 00:37:32.915247 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:37:32.921575 containerd[1547]: time="2026-03-13T00:37:32.921456360Z" level=info msg="CreateContainer within sandbox \"629a5b9c9d456ea7ea6a239eaf329465f34d960a2eaadbe3180cdb598cede476\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4d94e9900627bb99f6b0942e20d3e140d8a37ec041829fab6b8b3d10770e321e\"" Mar 13 00:37:32.922987 containerd[1547]: time="2026-03-13T00:37:32.922901040Z" level=info msg="StartContainer for \"4d94e9900627bb99f6b0942e20d3e140d8a37ec041829fab6b8b3d10770e321e\"" Mar 13 00:37:32.924013 containerd[1547]: time="2026-03-13T00:37:32.923985950Z" level=info msg="connecting to shim 4d94e9900627bb99f6b0942e20d3e140d8a37ec041829fab6b8b3d10770e321e" address="unix:///run/containerd/s/0893be7b5a061ae3eee5f744b3071b8928182df15e1b2add0362a57845cea1db" protocol=ttrpc version=3 Mar 13 00:37:32.928111 containerd[1547]: time="2026-03-13T00:37:32.928075420Z" level=info msg="CreateContainer within sandbox \"3d1c3c2f67a8034152d47e61d69809e061cb302a7fb5d94102945e592da2f6b1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 13 00:37:32.931259 containerd[1547]: 
time="2026-03-13T00:37:32.931228720Z" level=info msg="Container ed1cbc4aa7176650e9e5ae8b440db78998d5790d758a97562a73d74652c7d362: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:37:32.941204 containerd[1547]: time="2026-03-13T00:37:32.941094710Z" level=info msg="Container 8d0ec7b7b95dbd09524f37ca22b0674d0abdf8c8ec1aa39bf03d259760c3ca27: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:37:32.941819 containerd[1547]: time="2026-03-13T00:37:32.941798570Z" level=info msg="CreateContainer within sandbox \"b86930dba79dcb9dadbf6cd7b7332790c02a033a779e097b5b34def8aa3de3d1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ed1cbc4aa7176650e9e5ae8b440db78998d5790d758a97562a73d74652c7d362\"" Mar 13 00:37:32.943343 containerd[1547]: time="2026-03-13T00:37:32.943305170Z" level=info msg="StartContainer for \"ed1cbc4aa7176650e9e5ae8b440db78998d5790d758a97562a73d74652c7d362\"" Mar 13 00:37:32.946274 containerd[1547]: time="2026-03-13T00:37:32.946246400Z" level=info msg="CreateContainer within sandbox \"3d1c3c2f67a8034152d47e61d69809e061cb302a7fb5d94102945e592da2f6b1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8d0ec7b7b95dbd09524f37ca22b0674d0abdf8c8ec1aa39bf03d259760c3ca27\"" Mar 13 00:37:32.946826 containerd[1547]: time="2026-03-13T00:37:32.946809620Z" level=info msg="StartContainer for \"8d0ec7b7b95dbd09524f37ca22b0674d0abdf8c8ec1aa39bf03d259760c3ca27\"" Mar 13 00:37:32.947640 containerd[1547]: time="2026-03-13T00:37:32.947618570Z" level=info msg="connecting to shim 8d0ec7b7b95dbd09524f37ca22b0674d0abdf8c8ec1aa39bf03d259760c3ca27" address="unix:///run/containerd/s/fba662c5f575c63eed82b0b277450cd28e9537447d25d6978619997d7296766d" protocol=ttrpc version=3 Mar 13 00:37:32.948592 containerd[1547]: time="2026-03-13T00:37:32.948404190Z" level=info msg="connecting to shim ed1cbc4aa7176650e9e5ae8b440db78998d5790d758a97562a73d74652c7d362" 
address="unix:///run/containerd/s/3fcb4d22abbb6075e2facae6253a73e8370334ca456695d3b6ab335b582f492f" protocol=ttrpc version=3 Mar 13 00:37:32.958307 systemd[1]: Started cri-containerd-4d94e9900627bb99f6b0942e20d3e140d8a37ec041829fab6b8b3d10770e321e.scope - libcontainer container 4d94e9900627bb99f6b0942e20d3e140d8a37ec041829fab6b8b3d10770e321e. Mar 13 00:37:32.978223 systemd[1]: Started cri-containerd-ed1cbc4aa7176650e9e5ae8b440db78998d5790d758a97562a73d74652c7d362.scope - libcontainer container ed1cbc4aa7176650e9e5ae8b440db78998d5790d758a97562a73d74652c7d362. Mar 13 00:37:32.981301 systemd[1]: Started cri-containerd-8d0ec7b7b95dbd09524f37ca22b0674d0abdf8c8ec1aa39bf03d259760c3ca27.scope - libcontainer container 8d0ec7b7b95dbd09524f37ca22b0674d0abdf8c8ec1aa39bf03d259760c3ca27. Mar 13 00:37:33.008452 kubelet[2354]: I0313 00:37:33.008022 2354 kubelet_node_status.go:75] "Attempting to register node" node="172-234-197-95" Mar 13 00:37:33.008759 kubelet[2354]: E0313 00:37:33.008739 2354 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.197.95:6443/api/v1/nodes\": dial tcp 172.234.197.95:6443: connect: connection refused" node="172-234-197-95" Mar 13 00:37:33.050501 kubelet[2354]: E0313 00:37:33.050446 2354 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.234.197.95:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-197-95&limit=500&resourceVersion=0\": dial tcp 172.234.197.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 13 00:37:33.065163 containerd[1547]: time="2026-03-13T00:37:33.065129780Z" level=info msg="StartContainer for \"8d0ec7b7b95dbd09524f37ca22b0674d0abdf8c8ec1aa39bf03d259760c3ca27\" returns successfully" Mar 13 00:37:33.071879 containerd[1547]: time="2026-03-13T00:37:33.071853030Z" level=info msg="StartContainer for \"4d94e9900627bb99f6b0942e20d3e140d8a37ec041829fab6b8b3d10770e321e\" 
returns successfully" Mar 13 00:37:33.098392 containerd[1547]: time="2026-03-13T00:37:33.098331670Z" level=info msg="StartContainer for \"ed1cbc4aa7176650e9e5ae8b440db78998d5790d758a97562a73d74652c7d362\" returns successfully" Mar 13 00:37:33.274739 kubelet[2354]: E0313 00:37:33.274657 2354 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-197-95\" not found" node="172-234-197-95" Mar 13 00:37:33.276362 kubelet[2354]: E0313 00:37:33.275361 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:37:33.276362 kubelet[2354]: E0313 00:37:33.275532 2354 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-197-95\" not found" node="172-234-197-95" Mar 13 00:37:33.276362 kubelet[2354]: E0313 00:37:33.275607 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:37:33.276909 kubelet[2354]: E0313 00:37:33.276876 2354 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-197-95\" not found" node="172-234-197-95" Mar 13 00:37:33.277047 kubelet[2354]: E0313 00:37:33.277036 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:37:33.811613 kubelet[2354]: I0313 00:37:33.811561 2354 kubelet_node_status.go:75] "Attempting to register node" node="172-234-197-95" Mar 13 00:37:34.279201 kubelet[2354]: E0313 00:37:34.279164 2354 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-197-95\" not 
found" node="172-234-197-95" Mar 13 00:37:34.279414 kubelet[2354]: E0313 00:37:34.279399 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:37:34.280102 kubelet[2354]: E0313 00:37:34.279602 2354 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-197-95\" not found" node="172-234-197-95" Mar 13 00:37:34.280258 kubelet[2354]: E0313 00:37:34.280245 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:37:34.491015 kubelet[2354]: E0313 00:37:34.490946 2354 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-234-197-95\" not found" node="172-234-197-95" Mar 13 00:37:34.599370 kubelet[2354]: I0313 00:37:34.599264 2354 kubelet_node_status.go:78] "Successfully registered node" node="172-234-197-95" Mar 13 00:37:34.599370 kubelet[2354]: E0313 00:37:34.599292 2354 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172-234-197-95\": node \"172-234-197-95\" not found" Mar 13 00:37:34.624249 kubelet[2354]: E0313 00:37:34.624221 2354 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-234-197-95\" not found" Mar 13 00:37:34.727641 kubelet[2354]: I0313 00:37:34.727600 2354 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-197-95" Mar 13 00:37:34.735929 kubelet[2354]: E0313 00:37:34.735905 2354 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-197-95\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-234-197-95" Mar 13 00:37:34.735929 kubelet[2354]: I0313 
00:37:34.735930 2354 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-197-95" Mar 13 00:37:34.736904 kubelet[2354]: E0313 00:37:34.736851 2354 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-197-95\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-234-197-95" Mar 13 00:37:34.736904 kubelet[2354]: I0313 00:37:34.736870 2354 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-197-95" Mar 13 00:37:34.737973 kubelet[2354]: E0313 00:37:34.737952 2354 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-234-197-95\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-234-197-95" Mar 13 00:37:35.201578 kubelet[2354]: I0313 00:37:35.201547 2354 apiserver.go:52] "Watching apiserver" Mar 13 00:37:35.227921 kubelet[2354]: I0313 00:37:35.227856 2354 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 13 00:37:35.279901 kubelet[2354]: I0313 00:37:35.279336 2354 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-197-95" Mar 13 00:37:35.281016 kubelet[2354]: E0313 00:37:35.280991 2354 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-197-95\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-234-197-95" Mar 13 00:37:35.281260 kubelet[2354]: E0313 00:37:35.281241 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:37:36.478574 systemd[1]: Reload requested from client PID 2629 ('systemctl') (unit session-7.scope)... Mar 13 00:37:36.478593 systemd[1]: Reloading... 
Mar 13 00:37:36.582914 zram_generator::config[2682]: No configuration found. Mar 13 00:37:36.804260 systemd[1]: Reloading finished in 325 ms. Mar 13 00:37:36.837793 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:37:36.855748 systemd[1]: kubelet.service: Deactivated successfully. Mar 13 00:37:36.856036 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:37:36.856081 systemd[1]: kubelet.service: Consumed 1.234s CPU time, 132M memory peak. Mar 13 00:37:36.858491 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:37:37.018532 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:37:37.028297 (kubelet)[2724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 13 00:37:37.062314 kubelet[2724]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 00:37:37.062314 kubelet[2724]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 13 00:37:37.062314 kubelet[2724]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 13 00:37:37.062640 kubelet[2724]: I0313 00:37:37.062596 2724 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 13 00:37:37.071124 kubelet[2724]: I0313 00:37:37.071095 2724 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 13 00:37:37.071124 kubelet[2724]: I0313 00:37:37.071117 2724 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 13 00:37:37.071315 kubelet[2724]: I0313 00:37:37.071293 2724 server.go:956] "Client rotation is on, will bootstrap in background" Mar 13 00:37:37.072680 kubelet[2724]: I0313 00:37:37.072658 2724 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 13 00:37:37.077828 kubelet[2724]: I0313 00:37:37.077128 2724 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 13 00:37:37.082096 kubelet[2724]: I0313 00:37:37.082079 2724 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 13 00:37:37.087061 kubelet[2724]: I0313 00:37:37.087032 2724 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 13 00:37:37.088969 kubelet[2724]: I0313 00:37:37.087287 2724 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 00:37:37.088969 kubelet[2724]: I0313 00:37:37.087322 2724 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-197-95","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 00:37:37.088969 kubelet[2724]: I0313 00:37:37.087577 2724 topology_manager.go:138] "Creating topology manager with none policy" Mar 13 
00:37:37.088969 kubelet[2724]: I0313 00:37:37.087589 2724 container_manager_linux.go:303] "Creating device plugin manager" Mar 13 00:37:37.088969 kubelet[2724]: I0313 00:37:37.087647 2724 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:37:37.089134 kubelet[2724]: I0313 00:37:37.087835 2724 kubelet.go:480] "Attempting to sync node with API server" Mar 13 00:37:37.089134 kubelet[2724]: I0313 00:37:37.087849 2724 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 00:37:37.089134 kubelet[2724]: I0313 00:37:37.087882 2724 kubelet.go:386] "Adding apiserver pod source" Mar 13 00:37:37.089134 kubelet[2724]: I0313 00:37:37.087921 2724 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 00:37:37.092046 kubelet[2724]: I0313 00:37:37.092015 2724 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 13 00:37:37.092433 kubelet[2724]: I0313 00:37:37.092409 2724 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 13 00:37:37.100019 kubelet[2724]: I0313 00:37:37.099630 2724 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 13 00:37:37.100115 kubelet[2724]: I0313 00:37:37.100104 2724 server.go:1289] "Started kubelet" Mar 13 00:37:37.103221 kubelet[2724]: I0313 00:37:37.103207 2724 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 13 00:37:37.109508 kubelet[2724]: I0313 00:37:37.108962 2724 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 13 00:37:37.109617 kubelet[2724]: I0313 00:37:37.109595 2724 server.go:317] "Adding debug handlers to kubelet server" Mar 13 00:37:37.112762 kubelet[2724]: I0313 00:37:37.112723 2724 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 00:37:37.112956 kubelet[2724]: I0313 00:37:37.112937 2724 
server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 00:37:37.113106 kubelet[2724]: I0313 00:37:37.113087 2724 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 13 00:37:37.114230 kubelet[2724]: I0313 00:37:37.113735 2724 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 13 00:37:37.114230 kubelet[2724]: I0313 00:37:37.113801 2724 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 13 00:37:37.114230 kubelet[2724]: I0313 00:37:37.113924 2724 reconciler.go:26] "Reconciler: start to sync state" Mar 13 00:37:37.115413 kubelet[2724]: I0313 00:37:37.115390 2724 factory.go:223] Registration of the systemd container factory successfully Mar 13 00:37:37.115484 kubelet[2724]: I0313 00:37:37.115464 2724 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 13 00:37:37.116941 kubelet[2724]: E0313 00:37:37.116919 2724 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 13 00:37:37.118396 kubelet[2724]: I0313 00:37:37.118370 2724 factory.go:223] Registration of the containerd container factory successfully Mar 13 00:37:37.119425 kubelet[2724]: I0313 00:37:37.119408 2724 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 13 00:37:37.120720 kubelet[2724]: I0313 00:37:37.120699 2724 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Mar 13 00:37:37.120813 kubelet[2724]: I0313 00:37:37.120801 2724 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 13 00:37:37.120918 kubelet[2724]: I0313 00:37:37.120873 2724 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 13 00:37:37.121369 kubelet[2724]: I0313 00:37:37.120966 2724 kubelet.go:2436] "Starting kubelet main sync loop" Mar 13 00:37:37.121369 kubelet[2724]: E0313 00:37:37.121012 2724 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 00:37:37.165697 kubelet[2724]: I0313 00:37:37.165677 2724 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 13 00:37:37.165827 kubelet[2724]: I0313 00:37:37.165815 2724 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 13 00:37:37.165909 kubelet[2724]: I0313 00:37:37.165880 2724 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:37:37.166052 kubelet[2724]: I0313 00:37:37.166038 2724 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 13 00:37:37.166118 kubelet[2724]: I0313 00:37:37.166100 2724 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 13 00:37:37.166157 kubelet[2724]: I0313 00:37:37.166150 2724 policy_none.go:49] "None policy: Start" Mar 13 00:37:37.166202 kubelet[2724]: I0313 00:37:37.166194 2724 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 13 00:37:37.166243 kubelet[2724]: I0313 00:37:37.166236 2724 state_mem.go:35] "Initializing new in-memory state store" Mar 13 00:37:37.166352 kubelet[2724]: I0313 00:37:37.166342 2724 state_mem.go:75] "Updated machine memory state" Mar 13 00:37:37.170325 kubelet[2724]: E0313 00:37:37.170296 2724 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 13 00:37:37.170458 kubelet[2724]: I0313 
00:37:37.170437 2724 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 00:37:37.170486 kubelet[2724]: I0313 00:37:37.170453 2724 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 00:37:37.171559 kubelet[2724]: I0313 00:37:37.171149 2724 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 00:37:37.174386 kubelet[2724]: E0313 00:37:37.172727 2724 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 13 00:37:37.221758 kubelet[2724]: I0313 00:37:37.221741 2724 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-197-95" Mar 13 00:37:37.221844 kubelet[2724]: I0313 00:37:37.221823 2724 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-197-95" Mar 13 00:37:37.222007 kubelet[2724]: I0313 00:37:37.221743 2724 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-197-95" Mar 13 00:37:37.276698 kubelet[2724]: I0313 00:37:37.276675 2724 kubelet_node_status.go:75] "Attempting to register node" node="172-234-197-95" Mar 13 00:37:37.281493 kubelet[2724]: I0313 00:37:37.281469 2724 kubelet_node_status.go:124] "Node was previously registered" node="172-234-197-95" Mar 13 00:37:37.281549 kubelet[2724]: I0313 00:37:37.281541 2724 kubelet_node_status.go:78] "Successfully registered node" node="172-234-197-95" Mar 13 00:37:37.415957 kubelet[2724]: I0313 00:37:37.414769 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/88513775143ecfcb74942a91a02584b6-k8s-certs\") pod \"kube-apiserver-172-234-197-95\" (UID: \"88513775143ecfcb74942a91a02584b6\") " pod="kube-system/kube-apiserver-172-234-197-95" Mar 13 00:37:37.415957 kubelet[2724]: I0313 
00:37:37.414795 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/88513775143ecfcb74942a91a02584b6-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-197-95\" (UID: \"88513775143ecfcb74942a91a02584b6\") " pod="kube-system/kube-apiserver-172-234-197-95" Mar 13 00:37:37.415957 kubelet[2724]: I0313 00:37:37.414812 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9ee6a71c1bead963b481a911e43b3d80-kubeconfig\") pod \"kube-controller-manager-172-234-197-95\" (UID: \"9ee6a71c1bead963b481a911e43b3d80\") " pod="kube-system/kube-controller-manager-172-234-197-95" Mar 13 00:37:37.415957 kubelet[2724]: I0313 00:37:37.414827 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/88513775143ecfcb74942a91a02584b6-ca-certs\") pod \"kube-apiserver-172-234-197-95\" (UID: \"88513775143ecfcb74942a91a02584b6\") " pod="kube-system/kube-apiserver-172-234-197-95" Mar 13 00:37:37.415957 kubelet[2724]: I0313 00:37:37.414842 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ee6a71c1bead963b481a911e43b3d80-ca-certs\") pod \"kube-controller-manager-172-234-197-95\" (UID: \"9ee6a71c1bead963b481a911e43b3d80\") " pod="kube-system/kube-controller-manager-172-234-197-95" Mar 13 00:37:37.416116 kubelet[2724]: I0313 00:37:37.414856 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9ee6a71c1bead963b481a911e43b3d80-flexvolume-dir\") pod \"kube-controller-manager-172-234-197-95\" (UID: \"9ee6a71c1bead963b481a911e43b3d80\") " 
pod="kube-system/kube-controller-manager-172-234-197-95" Mar 13 00:37:37.416116 kubelet[2724]: I0313 00:37:37.414872 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ee6a71c1bead963b481a911e43b3d80-k8s-certs\") pod \"kube-controller-manager-172-234-197-95\" (UID: \"9ee6a71c1bead963b481a911e43b3d80\") " pod="kube-system/kube-controller-manager-172-234-197-95" Mar 13 00:37:37.416116 kubelet[2724]: I0313 00:37:37.414910 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ee6a71c1bead963b481a911e43b3d80-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-197-95\" (UID: \"9ee6a71c1bead963b481a911e43b3d80\") " pod="kube-system/kube-controller-manager-172-234-197-95" Mar 13 00:37:37.416116 kubelet[2724]: I0313 00:37:37.414926 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/92233e5a0565c6332f403ce5fb37737d-kubeconfig\") pod \"kube-scheduler-172-234-197-95\" (UID: \"92233e5a0565c6332f403ce5fb37737d\") " pod="kube-system/kube-scheduler-172-234-197-95" Mar 13 00:37:37.528022 kubelet[2724]: E0313 00:37:37.527729 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:37:37.528272 kubelet[2724]: E0313 00:37:37.528070 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:37:37.529612 kubelet[2724]: E0313 00:37:37.529557 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:37:38.094233 kubelet[2724]: I0313 00:37:38.093960 2724 apiserver.go:52] "Watching apiserver" Mar 13 00:37:38.114380 kubelet[2724]: I0313 00:37:38.114164 2724 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 13 00:37:38.153912 kubelet[2724]: E0313 00:37:38.151505 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:37:38.154274 kubelet[2724]: I0313 00:37:38.154258 2724 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-197-95" Mar 13 00:37:38.154944 kubelet[2724]: E0313 00:37:38.154789 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:37:38.164352 kubelet[2724]: E0313 00:37:38.164296 2724 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-197-95\" already exists" pod="kube-system/kube-apiserver-172-234-197-95" Mar 13 00:37:38.164658 kubelet[2724]: E0313 00:37:38.164642 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:37:38.200156 kubelet[2724]: I0313 00:37:38.200037 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-234-197-95" podStartSLOduration=1.20002153 podStartE2EDuration="1.20002153s" podCreationTimestamp="2026-03-13 00:37:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:37:38.19148692 +0000 UTC m=+1.158549481" watchObservedRunningTime="2026-03-13 
00:37:38.20002153 +0000 UTC m=+1.167084081" Mar 13 00:37:38.201264 kubelet[2724]: I0313 00:37:38.200828 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-234-197-95" podStartSLOduration=1.2008207 podStartE2EDuration="1.2008207s" podCreationTimestamp="2026-03-13 00:37:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:37:38.20062153 +0000 UTC m=+1.167684081" watchObservedRunningTime="2026-03-13 00:37:38.2008207 +0000 UTC m=+1.167883271" Mar 13 00:37:38.219624 kubelet[2724]: I0313 00:37:38.219470 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-234-197-95" podStartSLOduration=1.21946067 podStartE2EDuration="1.21946067s" podCreationTimestamp="2026-03-13 00:37:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:37:38.20908267 +0000 UTC m=+1.176145221" watchObservedRunningTime="2026-03-13 00:37:38.21946067 +0000 UTC m=+1.186523221" Mar 13 00:37:39.153052 kubelet[2724]: E0313 00:37:39.153012 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:37:39.153561 kubelet[2724]: E0313 00:37:39.153534 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:37:40.154212 kubelet[2724]: E0313 00:37:40.154178 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:37:41.835523 systemd[1]: systemd-hostnamed.service: Deactivated 
successfully. Mar 13 00:37:42.781554 kubelet[2724]: E0313 00:37:42.781508 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:37:44.397228 kubelet[2724]: I0313 00:37:44.397181 2724 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 13 00:37:44.397944 containerd[1547]: time="2026-03-13T00:37:44.397837780Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 13 00:37:44.398269 kubelet[2724]: I0313 00:37:44.398144 2724 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 13 00:37:45.370370 systemd[1]: Created slice kubepods-besteffort-pod213c7855_4549_4892_8b35_b36501edbebb.slice - libcontainer container kubepods-besteffort-pod213c7855_4549_4892_8b35_b36501edbebb.slice. Mar 13 00:37:45.464659 kubelet[2724]: I0313 00:37:45.464605 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/213c7855-4549-4892-8b35-b36501edbebb-kube-proxy\") pod \"kube-proxy-vq4wb\" (UID: \"213c7855-4549-4892-8b35-b36501edbebb\") " pod="kube-system/kube-proxy-vq4wb" Mar 13 00:37:45.464659 kubelet[2724]: I0313 00:37:45.464638 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/213c7855-4549-4892-8b35-b36501edbebb-lib-modules\") pod \"kube-proxy-vq4wb\" (UID: \"213c7855-4549-4892-8b35-b36501edbebb\") " pod="kube-system/kube-proxy-vq4wb" Mar 13 00:37:45.464659 kubelet[2724]: I0313 00:37:45.464659 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z49l8\" (UniqueName: 
\"kubernetes.io/projected/213c7855-4549-4892-8b35-b36501edbebb-kube-api-access-z49l8\") pod \"kube-proxy-vq4wb\" (UID: \"213c7855-4549-4892-8b35-b36501edbebb\") " pod="kube-system/kube-proxy-vq4wb" Mar 13 00:37:45.465275 kubelet[2724]: I0313 00:37:45.464686 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/213c7855-4549-4892-8b35-b36501edbebb-xtables-lock\") pod \"kube-proxy-vq4wb\" (UID: \"213c7855-4549-4892-8b35-b36501edbebb\") " pod="kube-system/kube-proxy-vq4wb" Mar 13 00:37:45.521369 systemd[1]: Created slice kubepods-besteffort-pod5f110e5e_4d48_480a_a58b_ec1218894a10.slice - libcontainer container kubepods-besteffort-pod5f110e5e_4d48_480a_a58b_ec1218894a10.slice. Mar 13 00:37:45.565369 kubelet[2724]: I0313 00:37:45.565321 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8v9n\" (UniqueName: \"kubernetes.io/projected/5f110e5e-4d48-480a-a58b-ec1218894a10-kube-api-access-t8v9n\") pod \"tigera-operator-6bf85f8dd-cgtg9\" (UID: \"5f110e5e-4d48-480a-a58b-ec1218894a10\") " pod="tigera-operator/tigera-operator-6bf85f8dd-cgtg9" Mar 13 00:37:45.565369 kubelet[2724]: I0313 00:37:45.565385 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5f110e5e-4d48-480a-a58b-ec1218894a10-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-cgtg9\" (UID: \"5f110e5e-4d48-480a-a58b-ec1218894a10\") " pod="tigera-operator/tigera-operator-6bf85f8dd-cgtg9" Mar 13 00:37:45.679662 kubelet[2724]: E0313 00:37:45.679516 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:37:45.680702 containerd[1547]: time="2026-03-13T00:37:45.680651600Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-vq4wb,Uid:213c7855-4549-4892-8b35-b36501edbebb,Namespace:kube-system,Attempt:0,}" Mar 13 00:37:45.700319 containerd[1547]: time="2026-03-13T00:37:45.700248880Z" level=info msg="connecting to shim 31b1272cc47e2aa8ffc9922c2cc22ee9a0c3fe1943e7ce02b2a3ec37d270c2cc" address="unix:///run/containerd/s/edd2ce34c7be662339b7347ff5f9a62ced9ea668d9f5a2e63278661918904d87" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:37:45.732051 systemd[1]: Started cri-containerd-31b1272cc47e2aa8ffc9922c2cc22ee9a0c3fe1943e7ce02b2a3ec37d270c2cc.scope - libcontainer container 31b1272cc47e2aa8ffc9922c2cc22ee9a0c3fe1943e7ce02b2a3ec37d270c2cc. Mar 13 00:37:45.765524 containerd[1547]: time="2026-03-13T00:37:45.765484880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vq4wb,Uid:213c7855-4549-4892-8b35-b36501edbebb,Namespace:kube-system,Attempt:0,} returns sandbox id \"31b1272cc47e2aa8ffc9922c2cc22ee9a0c3fe1943e7ce02b2a3ec37d270c2cc\"" Mar 13 00:37:45.766600 kubelet[2724]: E0313 00:37:45.766547 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:37:45.773245 containerd[1547]: time="2026-03-13T00:37:45.773217100Z" level=info msg="CreateContainer within sandbox \"31b1272cc47e2aa8ffc9922c2cc22ee9a0c3fe1943e7ce02b2a3ec37d270c2cc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 13 00:37:45.789597 containerd[1547]: time="2026-03-13T00:37:45.788371030Z" level=info msg="Container 871c5af1a2ae7591953e9e537468358e7e3ee153351e695bb312aa762716173a: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:37:45.794566 containerd[1547]: time="2026-03-13T00:37:45.794540220Z" level=info msg="CreateContainer within sandbox \"31b1272cc47e2aa8ffc9922c2cc22ee9a0c3fe1943e7ce02b2a3ec37d270c2cc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"871c5af1a2ae7591953e9e537468358e7e3ee153351e695bb312aa762716173a\"" Mar 13 00:37:45.795430 containerd[1547]: time="2026-03-13T00:37:45.795390520Z" level=info msg="StartContainer for \"871c5af1a2ae7591953e9e537468358e7e3ee153351e695bb312aa762716173a\"" Mar 13 00:37:45.796990 containerd[1547]: time="2026-03-13T00:37:45.796948400Z" level=info msg="connecting to shim 871c5af1a2ae7591953e9e537468358e7e3ee153351e695bb312aa762716173a" address="unix:///run/containerd/s/edd2ce34c7be662339b7347ff5f9a62ced9ea668d9f5a2e63278661918904d87" protocol=ttrpc version=3 Mar 13 00:37:45.822750 systemd[1]: Started cri-containerd-871c5af1a2ae7591953e9e537468358e7e3ee153351e695bb312aa762716173a.scope - libcontainer container 871c5af1a2ae7591953e9e537468358e7e3ee153351e695bb312aa762716173a. Mar 13 00:37:45.827819 containerd[1547]: time="2026-03-13T00:37:45.827753010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-cgtg9,Uid:5f110e5e-4d48-480a-a58b-ec1218894a10,Namespace:tigera-operator,Attempt:0,}" Mar 13 00:37:45.846108 containerd[1547]: time="2026-03-13T00:37:45.845917680Z" level=info msg="connecting to shim b3e9725f694ae86df857ade73717b0fb331451606146854c682c2956d1098f60" address="unix:///run/containerd/s/97e1e5a1c23530022f999557403c8f250d2572486ef3caf25a217ae0900cddf5" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:37:45.884099 systemd[1]: Started cri-containerd-b3e9725f694ae86df857ade73717b0fb331451606146854c682c2956d1098f60.scope - libcontainer container b3e9725f694ae86df857ade73717b0fb331451606146854c682c2956d1098f60. 
Mar 13 00:37:45.917358 containerd[1547]: time="2026-03-13T00:37:45.917311800Z" level=info msg="StartContainer for \"871c5af1a2ae7591953e9e537468358e7e3ee153351e695bb312aa762716173a\" returns successfully" Mar 13 00:37:45.947102 containerd[1547]: time="2026-03-13T00:37:45.946958640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-cgtg9,Uid:5f110e5e-4d48-480a-a58b-ec1218894a10,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b3e9725f694ae86df857ade73717b0fb331451606146854c682c2956d1098f60\"" Mar 13 00:37:45.950833 containerd[1547]: time="2026-03-13T00:37:45.950785200Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 13 00:37:46.165055 kubelet[2724]: E0313 00:37:46.165028 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:37:46.648426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount521214158.mount: Deactivated successfully. 
Mar 13 00:37:47.533267 kubelet[2724]: E0313 00:37:47.532996 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:37:47.553980 kubelet[2724]: I0313 00:37:47.553559 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vq4wb" podStartSLOduration=2.55354118 podStartE2EDuration="2.55354118s" podCreationTimestamp="2026-03-13 00:37:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:37:46.17329727 +0000 UTC m=+9.140359821" watchObservedRunningTime="2026-03-13 00:37:47.55354118 +0000 UTC m=+10.520603731" Mar 13 00:37:47.687851 containerd[1547]: time="2026-03-13T00:37:47.687017330Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:37:47.687851 containerd[1547]: time="2026-03-13T00:37:47.687801870Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 13 00:37:47.689294 containerd[1547]: time="2026-03-13T00:37:47.689182360Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:37:47.691914 containerd[1547]: time="2026-03-13T00:37:47.690738720Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:37:47.692282 containerd[1547]: time="2026-03-13T00:37:47.691345840Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag 
\"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 1.74050873s" Mar 13 00:37:47.692334 containerd[1547]: time="2026-03-13T00:37:47.692285010Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 13 00:37:47.697685 containerd[1547]: time="2026-03-13T00:37:47.697643040Z" level=info msg="CreateContainer within sandbox \"b3e9725f694ae86df857ade73717b0fb331451606146854c682c2956d1098f60\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 13 00:37:47.707993 containerd[1547]: time="2026-03-13T00:37:47.706465760Z" level=info msg="Container 8066605fa607be9512bb1b3c404655e8fe719f0d25d80832bc2e25295e17f6f9: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:37:47.722559 containerd[1547]: time="2026-03-13T00:37:47.721930460Z" level=info msg="CreateContainer within sandbox \"b3e9725f694ae86df857ade73717b0fb331451606146854c682c2956d1098f60\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8066605fa607be9512bb1b3c404655e8fe719f0d25d80832bc2e25295e17f6f9\"" Mar 13 00:37:47.722763 containerd[1547]: time="2026-03-13T00:37:47.722743690Z" level=info msg="StartContainer for \"8066605fa607be9512bb1b3c404655e8fe719f0d25d80832bc2e25295e17f6f9\"" Mar 13 00:37:47.725627 containerd[1547]: time="2026-03-13T00:37:47.725513840Z" level=info msg="connecting to shim 8066605fa607be9512bb1b3c404655e8fe719f0d25d80832bc2e25295e17f6f9" address="unix:///run/containerd/s/97e1e5a1c23530022f999557403c8f250d2572486ef3caf25a217ae0900cddf5" protocol=ttrpc version=3 Mar 13 00:37:47.754021 systemd[1]: Started cri-containerd-8066605fa607be9512bb1b3c404655e8fe719f0d25d80832bc2e25295e17f6f9.scope - libcontainer container 8066605fa607be9512bb1b3c404655e8fe719f0d25d80832bc2e25295e17f6f9. 
Mar 13 00:37:47.795072 containerd[1547]: time="2026-03-13T00:37:47.793999800Z" level=info msg="StartContainer for \"8066605fa607be9512bb1b3c404655e8fe719f0d25d80832bc2e25295e17f6f9\" returns successfully" Mar 13 00:37:49.379426 systemd-timesyncd[1465]: Contacted time server [2602:fcc0:3334:7796:123:123:123:123]:123 (2.flatcar.pool.ntp.org). Mar 13 00:37:49.379480 systemd-resolved[1431]: Clock change detected. Flushing caches. Mar 13 00:37:49.379515 systemd-timesyncd[1465]: Initial clock synchronization to Fri 2026-03-13 00:37:49.379079 UTC. Mar 13 00:37:49.434293 kubelet[2724]: E0313 00:37:49.434191 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:37:49.444471 kubelet[2724]: I0313 00:37:49.444410 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-cgtg9" podStartSLOduration=2.698912308 podStartE2EDuration="4.444394838s" podCreationTimestamp="2026-03-13 00:37:45 +0000 UTC" firstStartedPulling="2026-03-13 00:37:45.94832402 +0000 UTC m=+8.915386571" lastFinishedPulling="2026-03-13 00:37:47.69380655 +0000 UTC m=+10.660869101" observedRunningTime="2026-03-13 00:37:49.443521058 +0000 UTC m=+11.148202771" watchObservedRunningTime="2026-03-13 00:37:49.444394838 +0000 UTC m=+11.149076551" Mar 13 00:37:50.880988 kubelet[2724]: E0313 00:37:50.880944 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:37:51.440402 kubelet[2724]: E0313 00:37:51.440331 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:37:54.048713 kubelet[2724]: E0313 00:37:54.048356 2724 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:37:54.868885 sudo[1792]: pam_unix(sudo:session): session closed for user root Mar 13 00:37:54.896340 sshd[1791]: Connection closed by 68.220.241.50 port 39184 Mar 13 00:37:54.897341 sshd-session[1788]: pam_unix(sshd:session): session closed for user core Mar 13 00:37:54.902981 systemd-logind[1525]: Session 7 logged out. Waiting for processes to exit. Mar 13 00:37:54.904792 systemd[1]: sshd@6-172.234.197.95:22-68.220.241.50:39184.service: Deactivated successfully. Mar 13 00:37:54.907961 systemd[1]: session-7.scope: Deactivated successfully. Mar 13 00:37:54.908564 systemd[1]: session-7.scope: Consumed 5.074s CPU time, 232.9M memory peak. Mar 13 00:37:54.911156 systemd-logind[1525]: Removed session 7. Mar 13 00:37:57.358451 systemd[1]: Created slice kubepods-besteffort-pod0a74b794_1375_4f63_af51_63b72c59919e.slice - libcontainer container kubepods-besteffort-pod0a74b794_1375_4f63_af51_63b72c59919e.slice. 
Mar 13 00:37:57.401002 kubelet[2724]: I0313 00:37:57.400848 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxp94\" (UniqueName: \"kubernetes.io/projected/0a74b794-1375-4f63-af51-63b72c59919e-kube-api-access-nxp94\") pod \"calico-typha-c5fdcb779-2rzpt\" (UID: \"0a74b794-1375-4f63-af51-63b72c59919e\") " pod="calico-system/calico-typha-c5fdcb779-2rzpt" Mar 13 00:37:57.402072 kubelet[2724]: I0313 00:37:57.401848 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a74b794-1375-4f63-af51-63b72c59919e-tigera-ca-bundle\") pod \"calico-typha-c5fdcb779-2rzpt\" (UID: \"0a74b794-1375-4f63-af51-63b72c59919e\") " pod="calico-system/calico-typha-c5fdcb779-2rzpt" Mar 13 00:37:57.402072 kubelet[2724]: I0313 00:37:57.402018 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0a74b794-1375-4f63-af51-63b72c59919e-typha-certs\") pod \"calico-typha-c5fdcb779-2rzpt\" (UID: \"0a74b794-1375-4f63-af51-63b72c59919e\") " pod="calico-system/calico-typha-c5fdcb779-2rzpt" Mar 13 00:37:57.448869 systemd[1]: Created slice kubepods-besteffort-pod4e46285c_a6cf_48e0_9ef4_eca1494520e0.slice - libcontainer container kubepods-besteffort-pod4e46285c_a6cf_48e0_9ef4_eca1494520e0.slice. 
Mar 13 00:37:57.503054 kubelet[2724]: I0313 00:37:57.502998 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/4e46285c-a6cf-48e0-9ef4-eca1494520e0-sys-fs\") pod \"calico-node-f6dz7\" (UID: \"4e46285c-a6cf-48e0-9ef4-eca1494520e0\") " pod="calico-system/calico-node-f6dz7" Mar 13 00:37:57.503054 kubelet[2724]: I0313 00:37:57.503041 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4e46285c-a6cf-48e0-9ef4-eca1494520e0-policysync\") pod \"calico-node-f6dz7\" (UID: \"4e46285c-a6cf-48e0-9ef4-eca1494520e0\") " pod="calico-system/calico-node-f6dz7" Mar 13 00:37:57.503054 kubelet[2724]: I0313 00:37:57.503062 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4e46285c-a6cf-48e0-9ef4-eca1494520e0-cni-bin-dir\") pod \"calico-node-f6dz7\" (UID: \"4e46285c-a6cf-48e0-9ef4-eca1494520e0\") " pod="calico-system/calico-node-f6dz7" Mar 13 00:37:57.503408 kubelet[2724]: I0313 00:37:57.503079 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/4e46285c-a6cf-48e0-9ef4-eca1494520e0-bpffs\") pod \"calico-node-f6dz7\" (UID: \"4e46285c-a6cf-48e0-9ef4-eca1494520e0\") " pod="calico-system/calico-node-f6dz7" Mar 13 00:37:57.503408 kubelet[2724]: I0313 00:37:57.503098 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e46285c-a6cf-48e0-9ef4-eca1494520e0-tigera-ca-bundle\") pod \"calico-node-f6dz7\" (UID: \"4e46285c-a6cf-48e0-9ef4-eca1494520e0\") " pod="calico-system/calico-node-f6dz7" Mar 13 00:37:57.503408 kubelet[2724]: I0313 00:37:57.503118 2724 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4e46285c-a6cf-48e0-9ef4-eca1494520e0-var-lib-calico\") pod \"calico-node-f6dz7\" (UID: \"4e46285c-a6cf-48e0-9ef4-eca1494520e0\") " pod="calico-system/calico-node-f6dz7" Mar 13 00:37:57.503408 kubelet[2724]: I0313 00:37:57.503135 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4e46285c-a6cf-48e0-9ef4-eca1494520e0-var-run-calico\") pod \"calico-node-f6dz7\" (UID: \"4e46285c-a6cf-48e0-9ef4-eca1494520e0\") " pod="calico-system/calico-node-f6dz7" Mar 13 00:37:57.503408 kubelet[2724]: I0313 00:37:57.503151 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e46285c-a6cf-48e0-9ef4-eca1494520e0-xtables-lock\") pod \"calico-node-f6dz7\" (UID: \"4e46285c-a6cf-48e0-9ef4-eca1494520e0\") " pod="calico-system/calico-node-f6dz7" Mar 13 00:37:57.503586 kubelet[2724]: I0313 00:37:57.503169 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4e46285c-a6cf-48e0-9ef4-eca1494520e0-node-certs\") pod \"calico-node-f6dz7\" (UID: \"4e46285c-a6cf-48e0-9ef4-eca1494520e0\") " pod="calico-system/calico-node-f6dz7" Mar 13 00:37:57.503586 kubelet[2724]: I0313 00:37:57.503184 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/4e46285c-a6cf-48e0-9ef4-eca1494520e0-nodeproc\") pod \"calico-node-f6dz7\" (UID: \"4e46285c-a6cf-48e0-9ef4-eca1494520e0\") " pod="calico-system/calico-node-f6dz7" Mar 13 00:37:57.503586 kubelet[2724]: I0313 00:37:57.503212 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" 
(UniqueName: \"kubernetes.io/host-path/4e46285c-a6cf-48e0-9ef4-eca1494520e0-cni-net-dir\") pod \"calico-node-f6dz7\" (UID: \"4e46285c-a6cf-48e0-9ef4-eca1494520e0\") " pod="calico-system/calico-node-f6dz7" Mar 13 00:37:57.503586 kubelet[2724]: I0313 00:37:57.503227 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e46285c-a6cf-48e0-9ef4-eca1494520e0-lib-modules\") pod \"calico-node-f6dz7\" (UID: \"4e46285c-a6cf-48e0-9ef4-eca1494520e0\") " pod="calico-system/calico-node-f6dz7" Mar 13 00:37:57.503586 kubelet[2724]: I0313 00:37:57.503243 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjfzs\" (UniqueName: \"kubernetes.io/projected/4e46285c-a6cf-48e0-9ef4-eca1494520e0-kube-api-access-gjfzs\") pod \"calico-node-f6dz7\" (UID: \"4e46285c-a6cf-48e0-9ef4-eca1494520e0\") " pod="calico-system/calico-node-f6dz7" Mar 13 00:37:57.503717 kubelet[2724]: I0313 00:37:57.503320 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4e46285c-a6cf-48e0-9ef4-eca1494520e0-cni-log-dir\") pod \"calico-node-f6dz7\" (UID: \"4e46285c-a6cf-48e0-9ef4-eca1494520e0\") " pod="calico-system/calico-node-f6dz7" Mar 13 00:37:57.503717 kubelet[2724]: I0313 00:37:57.503351 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4e46285c-a6cf-48e0-9ef4-eca1494520e0-flexvol-driver-host\") pod \"calico-node-f6dz7\" (UID: \"4e46285c-a6cf-48e0-9ef4-eca1494520e0\") " pod="calico-system/calico-node-f6dz7" Mar 13 00:37:57.565699 kubelet[2724]: E0313 00:37:57.565592 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-npsmc" podUID="53df9c22-fa7e-4b17-8b53-e9ef874e7bac" Mar 13 00:37:57.605192 kubelet[2724]: I0313 00:37:57.604338 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/53df9c22-fa7e-4b17-8b53-e9ef874e7bac-socket-dir\") pod \"csi-node-driver-npsmc\" (UID: \"53df9c22-fa7e-4b17-8b53-e9ef874e7bac\") " pod="calico-system/csi-node-driver-npsmc" Mar 13 00:37:57.605192 kubelet[2724]: I0313 00:37:57.604414 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/53df9c22-fa7e-4b17-8b53-e9ef874e7bac-varrun\") pod \"csi-node-driver-npsmc\" (UID: \"53df9c22-fa7e-4b17-8b53-e9ef874e7bac\") " pod="calico-system/csi-node-driver-npsmc" Mar 13 00:37:57.605192 kubelet[2724]: I0313 00:37:57.604433 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54dfq\" (UniqueName: \"kubernetes.io/projected/53df9c22-fa7e-4b17-8b53-e9ef874e7bac-kube-api-access-54dfq\") pod \"csi-node-driver-npsmc\" (UID: \"53df9c22-fa7e-4b17-8b53-e9ef874e7bac\") " pod="calico-system/csi-node-driver-npsmc" Mar 13 00:37:57.605192 kubelet[2724]: I0313 00:37:57.604500 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/53df9c22-fa7e-4b17-8b53-e9ef874e7bac-kubelet-dir\") pod \"csi-node-driver-npsmc\" (UID: \"53df9c22-fa7e-4b17-8b53-e9ef874e7bac\") " pod="calico-system/csi-node-driver-npsmc" Mar 13 00:37:57.605192 kubelet[2724]: I0313 00:37:57.604518 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/53df9c22-fa7e-4b17-8b53-e9ef874e7bac-registration-dir\") pod 
\"csi-node-driver-npsmc\" (UID: \"53df9c22-fa7e-4b17-8b53-e9ef874e7bac\") " pod="calico-system/csi-node-driver-npsmc" Mar 13 00:37:57.610654 kubelet[2724]: E0313 00:37:57.609980 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:37:57.610805 kubelet[2724]: W0313 00:37:57.610789 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:37:57.610895 kubelet[2724]: E0313 00:37:57.610883 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:37:57.612843 kubelet[2724]: E0313 00:37:57.612831 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:37:57.612922 kubelet[2724]: W0313 00:37:57.612911 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:37:57.612999 kubelet[2724]: E0313 00:37:57.612969 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:37:57.626554 kubelet[2724]: E0313 00:37:57.626541 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:37:57.626636 kubelet[2724]: W0313 00:37:57.626623 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:37:57.626717 kubelet[2724]: E0313 00:37:57.626705 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:37:57.665876 kubelet[2724]: E0313 00:37:57.665850 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:37:57.666880 containerd[1547]: time="2026-03-13T00:37:57.666774508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c5fdcb779-2rzpt,Uid:0a74b794-1375-4f63-af51-63b72c59919e,Namespace:calico-system,Attempt:0,}" Mar 13 00:37:57.699324 containerd[1547]: time="2026-03-13T00:37:57.699236298Z" level=info msg="connecting to shim f0d77131c16429f8582274ced09b45cd488ddf1bc642f2f0cd0080d2681aa184" address="unix:///run/containerd/s/da7d1d22b4ddcddb9d4922dd58a2f2755188d150e3dd34c0bfa21e0f825cefcb" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:37:57.706968 kubelet[2724]: E0313 00:37:57.706864 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:37:57.706968 kubelet[2724]: W0313 00:37:57.706918 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 
00:37:57.706968 kubelet[2724]: E0313 00:37:57.706943 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:37:57.708573 kubelet[2724]: E0313 00:37:57.708537 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:37:57.708573 kubelet[2724]: W0313 00:37:57.708550 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:37:57.708757 kubelet[2724]: E0313 00:37:57.708667 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:37:57.709238 kubelet[2724]: E0313 00:37:57.709182 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:37:57.709238 kubelet[2724]: W0313 00:37:57.709192 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:37:57.709238 kubelet[2724]: E0313 00:37:57.709202 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:37:57.710683 kubelet[2724]: E0313 00:37:57.710671 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:37:57.710830 kubelet[2724]: W0313 00:37:57.710740 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:37:57.710830 kubelet[2724]: E0313 00:37:57.710752 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:37:57.736420 update_engine[1527]: I20260313 00:37:57.736321 1527 update_attempter.cc:509] Updating boot flags...
Mar 13 00:37:57.756250 kubelet[2724]: E0313 00:37:57.755469 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:37:57.756250 kubelet[2724]: W0313 00:37:57.755487 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:37:57.756250 kubelet[2724]: E0313 00:37:57.755500 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:37:57.760250 containerd[1547]: time="2026-03-13T00:37:57.756449828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f6dz7,Uid:4e46285c-a6cf-48e0-9ef4-eca1494520e0,Namespace:calico-system,Attempt:0,}"
Mar 13 00:37:57.770861 systemd[1]: Started cri-containerd-f0d77131c16429f8582274ced09b45cd488ddf1bc642f2f0cd0080d2681aa184.scope - libcontainer container f0d77131c16429f8582274ced09b45cd488ddf1bc642f2f0cd0080d2681aa184.
Mar 13 00:37:57.819401 containerd[1547]: time="2026-03-13T00:37:57.818572618Z" level=info msg="connecting to shim de08038339b9ee9c902b624adece33cbbdfc2dab46e31365f6d4d50eea5dcf1b" address="unix:///run/containerd/s/3bc53d56022d46318163cec6733e25022627e96000046f91b7385ec484cb0a8e" namespace=k8s.io protocol=ttrpc version=3
Mar 13 00:37:57.913415 systemd[1]: Started cri-containerd-de08038339b9ee9c902b624adece33cbbdfc2dab46e31365f6d4d50eea5dcf1b.scope - libcontainer container de08038339b9ee9c902b624adece33cbbdfc2dab46e31365f6d4d50eea5dcf1b.
Mar 13 00:37:58.141223 containerd[1547]: time="2026-03-13T00:37:58.141189548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f6dz7,Uid:4e46285c-a6cf-48e0-9ef4-eca1494520e0,Namespace:calico-system,Attempt:0,} returns sandbox id \"de08038339b9ee9c902b624adece33cbbdfc2dab46e31365f6d4d50eea5dcf1b\""
Mar 13 00:37:58.147363 containerd[1547]: time="2026-03-13T00:37:58.146728938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Mar 13 00:37:58.162039 containerd[1547]: time="2026-03-13T00:37:58.161900578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c5fdcb779-2rzpt,Uid:0a74b794-1375-4f63-af51-63b72c59919e,Namespace:calico-system,Attempt:0,} returns sandbox id \"f0d77131c16429f8582274ced09b45cd488ddf1bc642f2f0cd0080d2681aa184\""
Mar 13 00:37:58.165999 kubelet[2724]: E0313 00:37:58.164596 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Mar 13 00:37:58.776164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1308149013.mount: Deactivated successfully.
Mar 13 00:37:58.847891 containerd[1547]: time="2026-03-13T00:37:58.847839938Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:37:58.849257 containerd[1547]: time="2026-03-13T00:37:58.848950148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=6186433"
Mar 13 00:37:58.850009 containerd[1547]: time="2026-03-13T00:37:58.849969798Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:37:58.851969 containerd[1547]: time="2026-03-13T00:37:58.851930448Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:37:58.853075 containerd[1547]: time="2026-03-13T00:37:58.853040768Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 706.27762ms"
Mar 13 00:37:58.853162 containerd[1547]: time="2026-03-13T00:37:58.853144068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\""
Mar 13 00:37:58.855035 containerd[1547]: time="2026-03-13T00:37:58.854587218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Mar 13 00:37:58.860133 containerd[1547]: time="2026-03-13T00:37:58.860089978Z" level=info msg="CreateContainer within sandbox \"de08038339b9ee9c902b624adece33cbbdfc2dab46e31365f6d4d50eea5dcf1b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Mar 13 00:37:58.869317 containerd[1547]: time="2026-03-13T00:37:58.868003828Z" level=info msg="Container 708644f8d01a375ba375511f4f1d2b293308da22cda9764e314823fd3ed3e3ab: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:37:58.884858 containerd[1547]: time="2026-03-13T00:37:58.884786138Z" level=info msg="CreateContainer within sandbox \"de08038339b9ee9c902b624adece33cbbdfc2dab46e31365f6d4d50eea5dcf1b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"708644f8d01a375ba375511f4f1d2b293308da22cda9764e314823fd3ed3e3ab\""
Mar 13 00:37:58.885977 containerd[1547]: time="2026-03-13T00:37:58.885926008Z" level=info msg="StartContainer for \"708644f8d01a375ba375511f4f1d2b293308da22cda9764e314823fd3ed3e3ab\""
Mar 13 00:37:58.889850 containerd[1547]: time="2026-03-13T00:37:58.889796898Z" level=info msg="connecting to shim 708644f8d01a375ba375511f4f1d2b293308da22cda9764e314823fd3ed3e3ab" address="unix:///run/containerd/s/3bc53d56022d46318163cec6733e25022627e96000046f91b7385ec484cb0a8e" protocol=ttrpc version=3
Mar 13 00:37:58.925492 systemd[1]: Started cri-containerd-708644f8d01a375ba375511f4f1d2b293308da22cda9764e314823fd3ed3e3ab.scope - libcontainer container 708644f8d01a375ba375511f4f1d2b293308da22cda9764e314823fd3ed3e3ab.
Mar 13 00:37:59.003997 containerd[1547]: time="2026-03-13T00:37:59.003939008Z" level=info msg="StartContainer for \"708644f8d01a375ba375511f4f1d2b293308da22cda9764e314823fd3ed3e3ab\" returns successfully"
Mar 13 00:37:59.022182 systemd[1]: cri-containerd-708644f8d01a375ba375511f4f1d2b293308da22cda9764e314823fd3ed3e3ab.scope: Deactivated successfully.
Mar 13 00:37:59.025511 containerd[1547]: time="2026-03-13T00:37:59.025430198Z" level=info msg="received container exit event container_id:\"708644f8d01a375ba375511f4f1d2b293308da22cda9764e314823fd3ed3e3ab\" id:\"708644f8d01a375ba375511f4f1d2b293308da22cda9764e314823fd3ed3e3ab\" pid:3291 exited_at:{seconds:1773362279 nanos:24934578}"
Mar 13 00:37:59.383832 kubelet[2724]: E0313 00:37:59.383782 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-npsmc" podUID="53df9c22-fa7e-4b17-8b53-e9ef874e7bac"
Mar 13 00:38:00.167361 containerd[1547]: time="2026-03-13T00:38:00.167292398Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:38:00.168477 containerd[1547]: time="2026-03-13T00:38:00.168163228Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=34551413"
Mar 13 00:38:00.169057 containerd[1547]: time="2026-03-13T00:38:00.169014648Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:38:00.170901 containerd[1547]: time="2026-03-13T00:38:00.170866708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:38:00.171533 containerd[1547]: time="2026-03-13T00:38:00.171502018Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 1.31688596s"
Mar 13 00:38:00.171621 containerd[1547]: time="2026-03-13T00:38:00.171605798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\""
Mar 13 00:38:00.173958 containerd[1547]: time="2026-03-13T00:38:00.173887408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Mar 13 00:38:00.193376 containerd[1547]: time="2026-03-13T00:38:00.193330598Z" level=info msg="CreateContainer within sandbox \"f0d77131c16429f8582274ced09b45cd488ddf1bc642f2f0cd0080d2681aa184\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Mar 13 00:38:00.201143 containerd[1547]: time="2026-03-13T00:38:00.201076398Z" level=info msg="Container 62830cc58dbe140ac2fa4654f5152e115f87c47e19b5b7c1127bd621ae660ae5: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:38:00.206997 containerd[1547]: time="2026-03-13T00:38:00.206919218Z" level=info msg="CreateContainer within sandbox \"f0d77131c16429f8582274ced09b45cd488ddf1bc642f2f0cd0080d2681aa184\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"62830cc58dbe140ac2fa4654f5152e115f87c47e19b5b7c1127bd621ae660ae5\""
Mar 13 00:38:00.207713 containerd[1547]: time="2026-03-13T00:38:00.207667208Z" level=info msg="StartContainer for \"62830cc58dbe140ac2fa4654f5152e115f87c47e19b5b7c1127bd621ae660ae5\""
Mar 13 00:38:00.210331 containerd[1547]: time="2026-03-13T00:38:00.210288128Z" level=info msg="connecting to shim 62830cc58dbe140ac2fa4654f5152e115f87c47e19b5b7c1127bd621ae660ae5" address="unix:///run/containerd/s/da7d1d22b4ddcddb9d4922dd58a2f2755188d150e3dd34c0bfa21e0f825cefcb" protocol=ttrpc version=3
Mar 13 00:38:00.236422 systemd[1]: Started cri-containerd-62830cc58dbe140ac2fa4654f5152e115f87c47e19b5b7c1127bd621ae660ae5.scope - libcontainer container 62830cc58dbe140ac2fa4654f5152e115f87c47e19b5b7c1127bd621ae660ae5.
Mar 13 00:38:00.289333 containerd[1547]: time="2026-03-13T00:38:00.289253848Z" level=info msg="StartContainer for \"62830cc58dbe140ac2fa4654f5152e115f87c47e19b5b7c1127bd621ae660ae5\" returns successfully"
Mar 13 00:38:00.471092 kubelet[2724]: E0313 00:38:00.470901 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Mar 13 00:38:01.384591 kubelet[2724]: E0313 00:38:01.384406 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-npsmc" podUID="53df9c22-fa7e-4b17-8b53-e9ef874e7bac"
Mar 13 00:38:01.472297 kubelet[2724]: E0313 00:38:01.471840 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Mar 13 00:38:01.486034 kubelet[2724]: I0313 00:38:01.485980 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-c5fdcb779-2rzpt" podStartSLOduration=2.479977118 podStartE2EDuration="4.485958478s" podCreationTimestamp="2026-03-13 00:37:57 +0000 UTC" firstStartedPulling="2026-03-13 00:37:58.167388038 +0000 UTC m=+19.872069751" lastFinishedPulling="2026-03-13 00:38:00.173369398 +0000 UTC m=+21.878051111" observedRunningTime="2026-03-13 00:38:00.484485958 +0000 UTC m=+22.189167671" watchObservedRunningTime="2026-03-13 00:38:01.485958478 +0000 UTC m=+23.190640191"
Mar 13 00:38:02.475839 kubelet[2724]: E0313 00:38:02.475646 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Mar 13 00:38:03.384403 kubelet[2724]: E0313 00:38:03.384150 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-npsmc" podUID="53df9c22-fa7e-4b17-8b53-e9ef874e7bac"
Mar 13 00:38:04.160174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1078709857.mount: Deactivated successfully.
Mar 13 00:38:04.193190 containerd[1547]: time="2026-03-13T00:38:04.193143048Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:38:04.194123 containerd[1547]: time="2026-03-13T00:38:04.193938038Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Mar 13 00:38:04.194692 containerd[1547]: time="2026-03-13T00:38:04.194660208Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:38:04.196971 containerd[1547]: time="2026-03-13T00:38:04.196948928Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:38:04.198507 containerd[1547]: time="2026-03-13T00:38:04.198478428Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 4.0243506s"
Mar 13 00:38:04.198584 containerd[1547]: time="2026-03-13T00:38:04.198514748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Mar 13 00:38:04.203074 containerd[1547]: time="2026-03-13T00:38:04.203045568Z" level=info msg="CreateContainer within sandbox \"de08038339b9ee9c902b624adece33cbbdfc2dab46e31365f6d4d50eea5dcf1b\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Mar 13 00:38:04.213292 containerd[1547]: time="2026-03-13T00:38:04.209880968Z" level=info msg="Container a93de5b17a2429f977ad83f1d1ff77e201f0c79bbee727e2afcec75875d32257: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:38:04.218882 containerd[1547]: time="2026-03-13T00:38:04.218848308Z" level=info msg="CreateContainer within sandbox \"de08038339b9ee9c902b624adece33cbbdfc2dab46e31365f6d4d50eea5dcf1b\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"a93de5b17a2429f977ad83f1d1ff77e201f0c79bbee727e2afcec75875d32257\""
Mar 13 00:38:04.219450 containerd[1547]: time="2026-03-13T00:38:04.219408948Z" level=info msg="StartContainer for \"a93de5b17a2429f977ad83f1d1ff77e201f0c79bbee727e2afcec75875d32257\""
Mar 13 00:38:04.220993 containerd[1547]: time="2026-03-13T00:38:04.220962128Z" level=info msg="connecting to shim a93de5b17a2429f977ad83f1d1ff77e201f0c79bbee727e2afcec75875d32257" address="unix:///run/containerd/s/3bc53d56022d46318163cec6733e25022627e96000046f91b7385ec484cb0a8e" protocol=ttrpc version=3
Mar 13 00:38:04.246419 systemd[1]: Started cri-containerd-a93de5b17a2429f977ad83f1d1ff77e201f0c79bbee727e2afcec75875d32257.scope - libcontainer container a93de5b17a2429f977ad83f1d1ff77e201f0c79bbee727e2afcec75875d32257.
Mar 13 00:38:04.317170 containerd[1547]: time="2026-03-13T00:38:04.317091098Z" level=info msg="StartContainer for \"a93de5b17a2429f977ad83f1d1ff77e201f0c79bbee727e2afcec75875d32257\" returns successfully"
Mar 13 00:38:04.360222 systemd[1]: cri-containerd-a93de5b17a2429f977ad83f1d1ff77e201f0c79bbee727e2afcec75875d32257.scope: Deactivated successfully.
Mar 13 00:38:04.362154 containerd[1547]: time="2026-03-13T00:38:04.362083698Z" level=info msg="received container exit event container_id:\"a93de5b17a2429f977ad83f1d1ff77e201f0c79bbee727e2afcec75875d32257\" id:\"a93de5b17a2429f977ad83f1d1ff77e201f0c79bbee727e2afcec75875d32257\" pid:3388 exited_at:{seconds:1773362284 nanos:361460308}"
Mar 13 00:38:04.480857 containerd[1547]: time="2026-03-13T00:38:04.480243958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Mar 13 00:38:05.158719 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a93de5b17a2429f977ad83f1d1ff77e201f0c79bbee727e2afcec75875d32257-rootfs.mount: Deactivated successfully.
Mar 13 00:38:05.384079 kubelet[2724]: E0313 00:38:05.384026 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-npsmc" podUID="53df9c22-fa7e-4b17-8b53-e9ef874e7bac"
Mar 13 00:38:06.055329 containerd[1547]: time="2026-03-13T00:38:06.055263318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:38:06.056737 containerd[1547]: time="2026-03-13T00:38:06.056694808Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Mar 13 00:38:06.058235 containerd[1547]: time="2026-03-13T00:38:06.057178358Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:38:06.059236 containerd[1547]: time="2026-03-13T00:38:06.059179358Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:38:06.059880 containerd[1547]: time="2026-03-13T00:38:06.059858618Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 1.57888161s"
Mar 13 00:38:06.059962 containerd[1547]: time="2026-03-13T00:38:06.059948398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Mar 13 00:38:06.063240 containerd[1547]: time="2026-03-13T00:38:06.063191548Z" level=info msg="CreateContainer within sandbox \"de08038339b9ee9c902b624adece33cbbdfc2dab46e31365f6d4d50eea5dcf1b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 13 00:38:06.073354 containerd[1547]: time="2026-03-13T00:38:06.072348548Z" level=info msg="Container 083f81f286bcc24100a2455273deb2375bafa4c8a8666931e479e65b486d603a: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:38:06.083462 containerd[1547]: time="2026-03-13T00:38:06.083433788Z" level=info msg="CreateContainer within sandbox \"de08038339b9ee9c902b624adece33cbbdfc2dab46e31365f6d4d50eea5dcf1b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"083f81f286bcc24100a2455273deb2375bafa4c8a8666931e479e65b486d603a\""
Mar 13 00:38:06.084298 containerd[1547]: time="2026-03-13T00:38:06.084259708Z" level=info msg="StartContainer for \"083f81f286bcc24100a2455273deb2375bafa4c8a8666931e479e65b486d603a\""
Mar 13 00:38:06.085665 containerd[1547]: time="2026-03-13T00:38:06.085642718Z" level=info msg="connecting to shim 083f81f286bcc24100a2455273deb2375bafa4c8a8666931e479e65b486d603a" address="unix:///run/containerd/s/3bc53d56022d46318163cec6733e25022627e96000046f91b7385ec484cb0a8e" protocol=ttrpc version=3
Mar 13 00:38:06.119414 systemd[1]: Started cri-containerd-083f81f286bcc24100a2455273deb2375bafa4c8a8666931e479e65b486d603a.scope - libcontainer container 083f81f286bcc24100a2455273deb2375bafa4c8a8666931e479e65b486d603a.
Mar 13 00:38:06.224546 containerd[1547]: time="2026-03-13T00:38:06.224439758Z" level=info msg="StartContainer for \"083f81f286bcc24100a2455273deb2375bafa4c8a8666931e479e65b486d603a\" returns successfully" Mar 13 00:38:06.802040 containerd[1547]: time="2026-03-13T00:38:06.801980088Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 13 00:38:06.805308 systemd[1]: cri-containerd-083f81f286bcc24100a2455273deb2375bafa4c8a8666931e479e65b486d603a.scope: Deactivated successfully. Mar 13 00:38:06.805701 systemd[1]: cri-containerd-083f81f286bcc24100a2455273deb2375bafa4c8a8666931e479e65b486d603a.scope: Consumed 570ms CPU time, 193.7M memory peak, 1.9M read from disk, 177M written to disk. Mar 13 00:38:06.809224 containerd[1547]: time="2026-03-13T00:38:06.809162288Z" level=info msg="received container exit event container_id:\"083f81f286bcc24100a2455273deb2375bafa4c8a8666931e479e65b486d603a\" id:\"083f81f286bcc24100a2455273deb2375bafa4c8a8666931e479e65b486d603a\" pid:3444 exited_at:{seconds:1773362286 nanos:808649258}" Mar 13 00:38:06.858097 kubelet[2724]: I0313 00:38:06.857541 2724 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 13 00:38:06.875213 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-083f81f286bcc24100a2455273deb2375bafa4c8a8666931e479e65b486d603a-rootfs.mount: Deactivated successfully. Mar 13 00:38:06.942226 systemd[1]: Created slice kubepods-besteffort-poddd18cfae_69a6_47ae_8e71_672f8d18f675.slice - libcontainer container kubepods-besteffort-poddd18cfae_69a6_47ae_8e71_672f8d18f675.slice. Mar 13 00:38:06.953192 systemd[1]: Created slice kubepods-besteffort-podce304d33_4bf0_4446_8c15_afbb30b57a37.slice - libcontainer container kubepods-besteffort-podce304d33_4bf0_4446_8c15_afbb30b57a37.slice. 
Mar 13 00:38:06.959550 systemd[1]: Created slice kubepods-besteffort-podbe52a627_668e_446b_a54a_44e513eacab5.slice - libcontainer container kubepods-besteffort-podbe52a627_668e_446b_a54a_44e513eacab5.slice. Mar 13 00:38:06.968004 systemd[1]: Created slice kubepods-besteffort-podafd087c8_1a42_49f5_ab0b_7984fdd56d7a.slice - libcontainer container kubepods-besteffort-podafd087c8_1a42_49f5_ab0b_7984fdd56d7a.slice. Mar 13 00:38:06.977652 systemd[1]: Created slice kubepods-burstable-pod69732a13_74fe_417e_ae4a_16962aedcd17.slice - libcontainer container kubepods-burstable-pod69732a13_74fe_417e_ae4a_16962aedcd17.slice. Mar 13 00:38:06.981758 kubelet[2724]: I0313 00:38:06.981166 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd18cfae-69a6-47ae-8e71-672f8d18f675-config\") pod \"goldmane-5b85766d88-tsmbw\" (UID: \"dd18cfae-69a6-47ae-8e71-672f8d18f675\") " pod="calico-system/goldmane-5b85766d88-tsmbw" Mar 13 00:38:06.981758 kubelet[2724]: I0313 00:38:06.981196 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ce304d33-4bf0-4446-8c15-afbb30b57a37-whisker-backend-key-pair\") pod \"whisker-58cc4959f7-szlkp\" (UID: \"ce304d33-4bf0-4446-8c15-afbb30b57a37\") " pod="calico-system/whisker-58cc4959f7-szlkp" Mar 13 00:38:06.981758 kubelet[2724]: I0313 00:38:06.981217 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/be52a627-668e-446b-a54a-44e513eacab5-calico-apiserver-certs\") pod \"calico-apiserver-5b66459b76-24wfm\" (UID: \"be52a627-668e-446b-a54a-44e513eacab5\") " pod="calico-system/calico-apiserver-5b66459b76-24wfm" Mar 13 00:38:06.981758 kubelet[2724]: I0313 00:38:06.981235 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69732a13-74fe-417e-ae4a-16962aedcd17-config-volume\") pod \"coredns-674b8bbfcf-r7txz\" (UID: \"69732a13-74fe-417e-ae4a-16962aedcd17\") " pod="kube-system/coredns-674b8bbfcf-r7txz" Mar 13 00:38:06.981758 kubelet[2724]: I0313 00:38:06.981252 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgjcl\" (UniqueName: \"kubernetes.io/projected/69732a13-74fe-417e-ae4a-16962aedcd17-kube-api-access-bgjcl\") pod \"coredns-674b8bbfcf-r7txz\" (UID: \"69732a13-74fe-417e-ae4a-16962aedcd17\") " pod="kube-system/coredns-674b8bbfcf-r7txz" Mar 13 00:38:06.983938 kubelet[2724]: I0313 00:38:06.983497 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd18cfae-69a6-47ae-8e71-672f8d18f675-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-tsmbw\" (UID: \"dd18cfae-69a6-47ae-8e71-672f8d18f675\") " pod="calico-system/goldmane-5b85766d88-tsmbw" Mar 13 00:38:06.983938 kubelet[2724]: I0313 00:38:06.983533 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ce304d33-4bf0-4446-8c15-afbb30b57a37-nginx-config\") pod \"whisker-58cc4959f7-szlkp\" (UID: \"ce304d33-4bf0-4446-8c15-afbb30b57a37\") " pod="calico-system/whisker-58cc4959f7-szlkp" Mar 13 00:38:06.983938 kubelet[2724]: I0313 00:38:06.983551 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnb6m\" (UniqueName: \"kubernetes.io/projected/be52a627-668e-446b-a54a-44e513eacab5-kube-api-access-nnb6m\") pod \"calico-apiserver-5b66459b76-24wfm\" (UID: \"be52a627-668e-446b-a54a-44e513eacab5\") " pod="calico-system/calico-apiserver-5b66459b76-24wfm" Mar 13 00:38:06.983938 kubelet[2724]: I0313 00:38:06.983566 2724 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c9sc\" (UniqueName: \"kubernetes.io/projected/dd18cfae-69a6-47ae-8e71-672f8d18f675-kube-api-access-4c9sc\") pod \"goldmane-5b85766d88-tsmbw\" (UID: \"dd18cfae-69a6-47ae-8e71-672f8d18f675\") " pod="calico-system/goldmane-5b85766d88-tsmbw" Mar 13 00:38:06.983938 kubelet[2724]: I0313 00:38:06.983582 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/afd087c8-1a42-49f5-ab0b-7984fdd56d7a-calico-apiserver-certs\") pod \"calico-apiserver-5b66459b76-p78zh\" (UID: \"afd087c8-1a42-49f5-ab0b-7984fdd56d7a\") " pod="calico-system/calico-apiserver-5b66459b76-p78zh" Mar 13 00:38:06.984086 kubelet[2724]: I0313 00:38:06.983596 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce304d33-4bf0-4446-8c15-afbb30b57a37-whisker-ca-bundle\") pod \"whisker-58cc4959f7-szlkp\" (UID: \"ce304d33-4bf0-4446-8c15-afbb30b57a37\") " pod="calico-system/whisker-58cc4959f7-szlkp" Mar 13 00:38:06.984086 kubelet[2724]: I0313 00:38:06.983611 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9vh6\" (UniqueName: \"kubernetes.io/projected/ce304d33-4bf0-4446-8c15-afbb30b57a37-kube-api-access-x9vh6\") pod \"whisker-58cc4959f7-szlkp\" (UID: \"ce304d33-4bf0-4446-8c15-afbb30b57a37\") " pod="calico-system/whisker-58cc4959f7-szlkp" Mar 13 00:38:06.984086 kubelet[2724]: I0313 00:38:06.983628 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93729285-a189-4abc-ba2b-049bef976bb3-config-volume\") pod \"coredns-674b8bbfcf-wwppt\" (UID: \"93729285-a189-4abc-ba2b-049bef976bb3\") " 
pod="kube-system/coredns-674b8bbfcf-wwppt" Mar 13 00:38:06.984086 kubelet[2724]: I0313 00:38:06.983645 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs6rq\" (UniqueName: \"kubernetes.io/projected/c0e98f25-f75c-4aab-873f-d1379f52de43-kube-api-access-bs6rq\") pod \"calico-kube-controllers-7cbbd89d84-9crw6\" (UID: \"c0e98f25-f75c-4aab-873f-d1379f52de43\") " pod="calico-system/calico-kube-controllers-7cbbd89d84-9crw6" Mar 13 00:38:06.984923 kubelet[2724]: I0313 00:38:06.984541 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4bg8\" (UniqueName: \"kubernetes.io/projected/afd087c8-1a42-49f5-ab0b-7984fdd56d7a-kube-api-access-v4bg8\") pod \"calico-apiserver-5b66459b76-p78zh\" (UID: \"afd087c8-1a42-49f5-ab0b-7984fdd56d7a\") " pod="calico-system/calico-apiserver-5b66459b76-p78zh" Mar 13 00:38:06.984923 kubelet[2724]: I0313 00:38:06.984776 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0e98f25-f75c-4aab-873f-d1379f52de43-tigera-ca-bundle\") pod \"calico-kube-controllers-7cbbd89d84-9crw6\" (UID: \"c0e98f25-f75c-4aab-873f-d1379f52de43\") " pod="calico-system/calico-kube-controllers-7cbbd89d84-9crw6" Mar 13 00:38:06.984923 kubelet[2724]: I0313 00:38:06.984842 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/dd18cfae-69a6-47ae-8e71-672f8d18f675-goldmane-key-pair\") pod \"goldmane-5b85766d88-tsmbw\" (UID: \"dd18cfae-69a6-47ae-8e71-672f8d18f675\") " pod="calico-system/goldmane-5b85766d88-tsmbw" Mar 13 00:38:06.984923 kubelet[2724]: I0313 00:38:06.984866 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d425n\" (UniqueName: 
\"kubernetes.io/projected/93729285-a189-4abc-ba2b-049bef976bb3-kube-api-access-d425n\") pod \"coredns-674b8bbfcf-wwppt\" (UID: \"93729285-a189-4abc-ba2b-049bef976bb3\") " pod="kube-system/coredns-674b8bbfcf-wwppt" Mar 13 00:38:06.993405 systemd[1]: Created slice kubepods-besteffort-podc0e98f25_f75c_4aab_873f_d1379f52de43.slice - libcontainer container kubepods-besteffort-podc0e98f25_f75c_4aab_873f_d1379f52de43.slice. Mar 13 00:38:07.000297 systemd[1]: Created slice kubepods-burstable-pod93729285_a189_4abc_ba2b_049bef976bb3.slice - libcontainer container kubepods-burstable-pod93729285_a189_4abc_ba2b_049bef976bb3.slice. Mar 13 00:38:07.248774 containerd[1547]: time="2026-03-13T00:38:07.248656008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-tsmbw,Uid:dd18cfae-69a6-47ae-8e71-672f8d18f675,Namespace:calico-system,Attempt:0,}" Mar 13 00:38:07.258679 containerd[1547]: time="2026-03-13T00:38:07.258620018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58cc4959f7-szlkp,Uid:ce304d33-4bf0-4446-8c15-afbb30b57a37,Namespace:calico-system,Attempt:0,}" Mar 13 00:38:07.266410 containerd[1547]: time="2026-03-13T00:38:07.266378738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b66459b76-24wfm,Uid:be52a627-668e-446b-a54a-44e513eacab5,Namespace:calico-system,Attempt:0,}" Mar 13 00:38:07.275313 containerd[1547]: time="2026-03-13T00:38:07.275071578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b66459b76-p78zh,Uid:afd087c8-1a42-49f5-ab0b-7984fdd56d7a,Namespace:calico-system,Attempt:0,}" Mar 13 00:38:07.290972 kubelet[2724]: E0313 00:38:07.290926 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:38:07.295665 containerd[1547]: time="2026-03-13T00:38:07.293577298Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-r7txz,Uid:69732a13-74fe-417e-ae4a-16962aedcd17,Namespace:kube-system,Attempt:0,}" Mar 13 00:38:07.314675 kubelet[2724]: E0313 00:38:07.314644 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:38:07.317915 containerd[1547]: time="2026-03-13T00:38:07.317861958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cbbd89d84-9crw6,Uid:c0e98f25-f75c-4aab-873f-d1379f52de43,Namespace:calico-system,Attempt:0,}" Mar 13 00:38:07.323286 containerd[1547]: time="2026-03-13T00:38:07.322408478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wwppt,Uid:93729285-a189-4abc-ba2b-049bef976bb3,Namespace:kube-system,Attempt:0,}" Mar 13 00:38:07.392784 systemd[1]: Created slice kubepods-besteffort-pod53df9c22_fa7e_4b17_8b53_e9ef874e7bac.slice - libcontainer container kubepods-besteffort-pod53df9c22_fa7e_4b17_8b53_e9ef874e7bac.slice. 
Mar 13 00:38:07.398414 containerd[1547]: time="2026-03-13T00:38:07.398245248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-npsmc,Uid:53df9c22-fa7e-4b17-8b53-e9ef874e7bac,Namespace:calico-system,Attempt:0,}" Mar 13 00:38:07.482770 containerd[1547]: time="2026-03-13T00:38:07.482685378Z" level=error msg="Failed to destroy network for sandbox \"068d3d92e37857475177cc43ae46b6828fc0e952f867fcd0a34d9c57585a3f87\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:38:07.486292 containerd[1547]: time="2026-03-13T00:38:07.486231938Z" level=error msg="Failed to destroy network for sandbox \"ae428e240b6e1981fe175a115519bcfd0cadab069510ef84bde67551cbce58eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:38:07.486683 containerd[1547]: time="2026-03-13T00:38:07.486637948Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b66459b76-p78zh,Uid:afd087c8-1a42-49f5-ab0b-7984fdd56d7a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"068d3d92e37857475177cc43ae46b6828fc0e952f867fcd0a34d9c57585a3f87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:38:07.487106 kubelet[2724]: E0313 00:38:07.487013 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"068d3d92e37857475177cc43ae46b6828fc0e952f867fcd0a34d9c57585a3f87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Mar 13 00:38:07.487895 kubelet[2724]: E0313 00:38:07.487616 2724 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"068d3d92e37857475177cc43ae46b6828fc0e952f867fcd0a34d9c57585a3f87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5b66459b76-p78zh" Mar 13 00:38:07.487895 kubelet[2724]: E0313 00:38:07.487675 2724 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"068d3d92e37857475177cc43ae46b6828fc0e952f867fcd0a34d9c57585a3f87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5b66459b76-p78zh" Mar 13 00:38:07.487895 kubelet[2724]: E0313 00:38:07.487763 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b66459b76-p78zh_calico-system(afd087c8-1a42-49f5-ab0b-7984fdd56d7a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b66459b76-p78zh_calico-system(afd087c8-1a42-49f5-ab0b-7984fdd56d7a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"068d3d92e37857475177cc43ae46b6828fc0e952f867fcd0a34d9c57585a3f87\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5b66459b76-p78zh" podUID="afd087c8-1a42-49f5-ab0b-7984fdd56d7a" Mar 13 00:38:07.489331 containerd[1547]: time="2026-03-13T00:38:07.489237688Z" level=error msg="RunPodSandbox 
for &PodSandboxMetadata{Name:whisker-58cc4959f7-szlkp,Uid:ce304d33-4bf0-4446-8c15-afbb30b57a37,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae428e240b6e1981fe175a115519bcfd0cadab069510ef84bde67551cbce58eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:38:07.490025 kubelet[2724]: E0313 00:38:07.489497 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae428e240b6e1981fe175a115519bcfd0cadab069510ef84bde67551cbce58eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:38:07.490025 kubelet[2724]: E0313 00:38:07.489564 2724 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae428e240b6e1981fe175a115519bcfd0cadab069510ef84bde67551cbce58eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-58cc4959f7-szlkp" Mar 13 00:38:07.490025 kubelet[2724]: E0313 00:38:07.489593 2724 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae428e240b6e1981fe175a115519bcfd0cadab069510ef84bde67551cbce58eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-58cc4959f7-szlkp" Mar 13 00:38:07.490132 kubelet[2724]: E0313 00:38:07.489656 2724 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-58cc4959f7-szlkp_calico-system(ce304d33-4bf0-4446-8c15-afbb30b57a37)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-58cc4959f7-szlkp_calico-system(ce304d33-4bf0-4446-8c15-afbb30b57a37)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae428e240b6e1981fe175a115519bcfd0cadab069510ef84bde67551cbce58eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-58cc4959f7-szlkp" podUID="ce304d33-4bf0-4446-8c15-afbb30b57a37" Mar 13 00:38:07.503225 containerd[1547]: time="2026-03-13T00:38:07.502952378Z" level=error msg="Failed to destroy network for sandbox \"b57aa14fa64259f0653a520b7d2ba3758967981296a19bd21634578fe82e7be2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:38:07.506790 containerd[1547]: time="2026-03-13T00:38:07.506756598Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r7txz,Uid:69732a13-74fe-417e-ae4a-16962aedcd17,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b57aa14fa64259f0653a520b7d2ba3758967981296a19bd21634578fe82e7be2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:38:07.507040 kubelet[2724]: E0313 00:38:07.507006 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b57aa14fa64259f0653a520b7d2ba3758967981296a19bd21634578fe82e7be2\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:38:07.507095 kubelet[2724]: E0313 00:38:07.507072 2724 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b57aa14fa64259f0653a520b7d2ba3758967981296a19bd21634578fe82e7be2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-r7txz" Mar 13 00:38:07.507127 kubelet[2724]: E0313 00:38:07.507099 2724 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b57aa14fa64259f0653a520b7d2ba3758967981296a19bd21634578fe82e7be2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-r7txz" Mar 13 00:38:07.507233 kubelet[2724]: E0313 00:38:07.507141 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-r7txz_kube-system(69732a13-74fe-417e-ae4a-16962aedcd17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-r7txz_kube-system(69732a13-74fe-417e-ae4a-16962aedcd17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b57aa14fa64259f0653a520b7d2ba3758967981296a19bd21634578fe82e7be2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-r7txz" podUID="69732a13-74fe-417e-ae4a-16962aedcd17" Mar 13 00:38:07.532069 containerd[1547]: time="2026-03-13T00:38:07.531565408Z" 
level=info msg="CreateContainer within sandbox \"de08038339b9ee9c902b624adece33cbbdfc2dab46e31365f6d4d50eea5dcf1b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 13 00:38:07.552441 containerd[1547]: time="2026-03-13T00:38:07.552398858Z" level=error msg="Failed to destroy network for sandbox \"6c6384ff486ab9a7a985cb2df2e02f8637a29cdc392e70bbf79d90af84a81cb1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:38:07.558122 containerd[1547]: time="2026-03-13T00:38:07.557820578Z" level=info msg="Container 413ec2cd9fd55a151c811b5bd9f5e92715eba1e4c9ccc5562ea0db489594651f: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:38:07.558642 containerd[1547]: time="2026-03-13T00:38:07.558610618Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cbbd89d84-9crw6,Uid:c0e98f25-f75c-4aab-873f-d1379f52de43,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c6384ff486ab9a7a985cb2df2e02f8637a29cdc392e70bbf79d90af84a81cb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:38:07.559934 kubelet[2724]: E0313 00:38:07.559875 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c6384ff486ab9a7a985cb2df2e02f8637a29cdc392e70bbf79d90af84a81cb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:38:07.560029 kubelet[2724]: E0313 00:38:07.559958 2724 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"6c6384ff486ab9a7a985cb2df2e02f8637a29cdc392e70bbf79d90af84a81cb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cbbd89d84-9crw6" Mar 13 00:38:07.560029 kubelet[2724]: E0313 00:38:07.559982 2724 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c6384ff486ab9a7a985cb2df2e02f8637a29cdc392e70bbf79d90af84a81cb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cbbd89d84-9crw6" Mar 13 00:38:07.560192 kubelet[2724]: E0313 00:38:07.560135 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7cbbd89d84-9crw6_calico-system(c0e98f25-f75c-4aab-873f-d1379f52de43)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7cbbd89d84-9crw6_calico-system(c0e98f25-f75c-4aab-873f-d1379f52de43)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c6384ff486ab9a7a985cb2df2e02f8637a29cdc392e70bbf79d90af84a81cb1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7cbbd89d84-9crw6" podUID="c0e98f25-f75c-4aab-873f-d1379f52de43" Mar 13 00:38:07.571492 containerd[1547]: time="2026-03-13T00:38:07.571312708Z" level=error msg="Failed to destroy network for sandbox \"cc1754b73b8efb9cf58694b8220d9f55316c1396ebd41aa9206ed76a2634110e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:38:07.572873 containerd[1547]: time="2026-03-13T00:38:07.572824288Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b66459b76-24wfm,Uid:be52a627-668e-446b-a54a-44e513eacab5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc1754b73b8efb9cf58694b8220d9f55316c1396ebd41aa9206ed76a2634110e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:38:07.573372 kubelet[2724]: E0313 00:38:07.573133 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc1754b73b8efb9cf58694b8220d9f55316c1396ebd41aa9206ed76a2634110e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:38:07.573372 kubelet[2724]: E0313 00:38:07.573220 2724 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc1754b73b8efb9cf58694b8220d9f55316c1396ebd41aa9206ed76a2634110e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5b66459b76-24wfm" Mar 13 00:38:07.573372 kubelet[2724]: E0313 00:38:07.573252 2724 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc1754b73b8efb9cf58694b8220d9f55316c1396ebd41aa9206ed76a2634110e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5b66459b76-24wfm" Mar 13 00:38:07.574137 kubelet[2724]: E0313 00:38:07.573351 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b66459b76-24wfm_calico-system(be52a627-668e-446b-a54a-44e513eacab5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b66459b76-24wfm_calico-system(be52a627-668e-446b-a54a-44e513eacab5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc1754b73b8efb9cf58694b8220d9f55316c1396ebd41aa9206ed76a2634110e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5b66459b76-24wfm" podUID="be52a627-668e-446b-a54a-44e513eacab5" Mar 13 00:38:07.574802 containerd[1547]: time="2026-03-13T00:38:07.574686138Z" level=info msg="CreateContainer within sandbox \"de08038339b9ee9c902b624adece33cbbdfc2dab46e31365f6d4d50eea5dcf1b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"413ec2cd9fd55a151c811b5bd9f5e92715eba1e4c9ccc5562ea0db489594651f\"" Mar 13 00:38:07.575371 containerd[1547]: time="2026-03-13T00:38:07.575229908Z" level=info msg="StartContainer for \"413ec2cd9fd55a151c811b5bd9f5e92715eba1e4c9ccc5562ea0db489594651f\"" Mar 13 00:38:07.577107 containerd[1547]: time="2026-03-13T00:38:07.577037978Z" level=info msg="connecting to shim 413ec2cd9fd55a151c811b5bd9f5e92715eba1e4c9ccc5562ea0db489594651f" address="unix:///run/containerd/s/3bc53d56022d46318163cec6733e25022627e96000046f91b7385ec484cb0a8e" protocol=ttrpc version=3 Mar 13 00:38:07.585502 containerd[1547]: time="2026-03-13T00:38:07.585466418Z" level=error msg="Failed to destroy network for sandbox \"cf518a8d4a0eb6afc9a26373db9167886b31c72351eeb468320838568fca06f4\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:38:07.586807 containerd[1547]: time="2026-03-13T00:38:07.586779658Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-tsmbw,Uid:dd18cfae-69a6-47ae-8e71-672f8d18f675,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf518a8d4a0eb6afc9a26373db9167886b31c72351eeb468320838568fca06f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:38:07.587111 kubelet[2724]: E0313 00:38:07.587038 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf518a8d4a0eb6afc9a26373db9167886b31c72351eeb468320838568fca06f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:38:07.587207 kubelet[2724]: E0313 00:38:07.587190 2724 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf518a8d4a0eb6afc9a26373db9167886b31c72351eeb468320838568fca06f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-tsmbw" Mar 13 00:38:07.587412 kubelet[2724]: E0313 00:38:07.587250 2724 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf518a8d4a0eb6afc9a26373db9167886b31c72351eeb468320838568fca06f4\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-tsmbw" Mar 13 00:38:07.587624 kubelet[2724]: E0313 00:38:07.587569 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-tsmbw_calico-system(dd18cfae-69a6-47ae-8e71-672f8d18f675)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-tsmbw_calico-system(dd18cfae-69a6-47ae-8e71-672f8d18f675)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf518a8d4a0eb6afc9a26373db9167886b31c72351eeb468320838568fca06f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-tsmbw" podUID="dd18cfae-69a6-47ae-8e71-672f8d18f675" Mar 13 00:38:07.605467 systemd[1]: Started cri-containerd-413ec2cd9fd55a151c811b5bd9f5e92715eba1e4c9ccc5562ea0db489594651f.scope - libcontainer container 413ec2cd9fd55a151c811b5bd9f5e92715eba1e4c9ccc5562ea0db489594651f. 
Mar 13 00:38:07.608079 containerd[1547]: time="2026-03-13T00:38:07.608031348Z" level=error msg="Failed to destroy network for sandbox \"f0a86a8f840a09528e692820684e8efa721f3bacfb58193a9909c5eeac5c0d2e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:38:07.610296 containerd[1547]: time="2026-03-13T00:38:07.610191278Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wwppt,Uid:93729285-a189-4abc-ba2b-049bef976bb3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0a86a8f840a09528e692820684e8efa721f3bacfb58193a9909c5eeac5c0d2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:38:07.610864 kubelet[2724]: E0313 00:38:07.610556 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0a86a8f840a09528e692820684e8efa721f3bacfb58193a9909c5eeac5c0d2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:38:07.610864 kubelet[2724]: E0313 00:38:07.610631 2724 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0a86a8f840a09528e692820684e8efa721f3bacfb58193a9909c5eeac5c0d2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-wwppt" Mar 13 00:38:07.610864 kubelet[2724]: E0313 00:38:07.610657 2724 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0a86a8f840a09528e692820684e8efa721f3bacfb58193a9909c5eeac5c0d2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-wwppt" Mar 13 00:38:07.611001 kubelet[2724]: E0313 00:38:07.610713 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-wwppt_kube-system(93729285-a189-4abc-ba2b-049bef976bb3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-wwppt_kube-system(93729285-a189-4abc-ba2b-049bef976bb3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0a86a8f840a09528e692820684e8efa721f3bacfb58193a9909c5eeac5c0d2e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-wwppt" podUID="93729285-a189-4abc-ba2b-049bef976bb3" Mar 13 00:38:07.624868 containerd[1547]: time="2026-03-13T00:38:07.624824538Z" level=error msg="Failed to destroy network for sandbox \"e81db92b250a7cf067779954a3cf8360e2beec05657ca61a9c8ea8ba5630b1dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:38:07.625802 containerd[1547]: time="2026-03-13T00:38:07.625731448Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-npsmc,Uid:53df9c22-fa7e-4b17-8b53-e9ef874e7bac,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e81db92b250a7cf067779954a3cf8360e2beec05657ca61a9c8ea8ba5630b1dc\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:38:07.625986 kubelet[2724]: E0313 00:38:07.625950 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e81db92b250a7cf067779954a3cf8360e2beec05657ca61a9c8ea8ba5630b1dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:38:07.626045 kubelet[2724]: E0313 00:38:07.626005 2724 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e81db92b250a7cf067779954a3cf8360e2beec05657ca61a9c8ea8ba5630b1dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-npsmc" Mar 13 00:38:07.626045 kubelet[2724]: E0313 00:38:07.626027 2724 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e81db92b250a7cf067779954a3cf8360e2beec05657ca61a9c8ea8ba5630b1dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-npsmc" Mar 13 00:38:07.626129 kubelet[2724]: E0313 00:38:07.626096 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-npsmc_calico-system(53df9c22-fa7e-4b17-8b53-e9ef874e7bac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-npsmc_calico-system(53df9c22-fa7e-4b17-8b53-e9ef874e7bac)\\\": rpc error: code = Unknown 
desc = failed to setup network for sandbox \\\"e81db92b250a7cf067779954a3cf8360e2beec05657ca61a9c8ea8ba5630b1dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-npsmc" podUID="53df9c22-fa7e-4b17-8b53-e9ef874e7bac" Mar 13 00:38:07.680659 containerd[1547]: time="2026-03-13T00:38:07.680618018Z" level=info msg="StartContainer for \"413ec2cd9fd55a151c811b5bd9f5e92715eba1e4c9ccc5562ea0db489594651f\" returns successfully" Mar 13 00:38:07.893625 kubelet[2724]: I0313 00:38:07.893578 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ce304d33-4bf0-4446-8c15-afbb30b57a37-nginx-config\") pod \"ce304d33-4bf0-4446-8c15-afbb30b57a37\" (UID: \"ce304d33-4bf0-4446-8c15-afbb30b57a37\") " Mar 13 00:38:07.894670 kubelet[2724]: I0313 00:38:07.894637 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9vh6\" (UniqueName: \"kubernetes.io/projected/ce304d33-4bf0-4446-8c15-afbb30b57a37-kube-api-access-x9vh6\") pod \"ce304d33-4bf0-4446-8c15-afbb30b57a37\" (UID: \"ce304d33-4bf0-4446-8c15-afbb30b57a37\") " Mar 13 00:38:07.894997 kubelet[2724]: I0313 00:38:07.894690 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce304d33-4bf0-4446-8c15-afbb30b57a37-whisker-ca-bundle\") pod \"ce304d33-4bf0-4446-8c15-afbb30b57a37\" (UID: \"ce304d33-4bf0-4446-8c15-afbb30b57a37\") " Mar 13 00:38:07.895056 kubelet[2724]: I0313 00:38:07.895002 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ce304d33-4bf0-4446-8c15-afbb30b57a37-whisker-backend-key-pair\") pod \"ce304d33-4bf0-4446-8c15-afbb30b57a37\" (UID: 
\"ce304d33-4bf0-4446-8c15-afbb30b57a37\") " Mar 13 00:38:07.899454 kubelet[2724]: I0313 00:38:07.896978 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce304d33-4bf0-4446-8c15-afbb30b57a37-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "ce304d33-4bf0-4446-8c15-afbb30b57a37" (UID: "ce304d33-4bf0-4446-8c15-afbb30b57a37"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 13 00:38:07.901875 kubelet[2724]: I0313 00:38:07.901596 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce304d33-4bf0-4446-8c15-afbb30b57a37-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ce304d33-4bf0-4446-8c15-afbb30b57a37" (UID: "ce304d33-4bf0-4446-8c15-afbb30b57a37"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 13 00:38:07.909491 kubelet[2724]: I0313 00:38:07.909391 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce304d33-4bf0-4446-8c15-afbb30b57a37-kube-api-access-x9vh6" (OuterVolumeSpecName: "kube-api-access-x9vh6") pod "ce304d33-4bf0-4446-8c15-afbb30b57a37" (UID: "ce304d33-4bf0-4446-8c15-afbb30b57a37"). InnerVolumeSpecName "kube-api-access-x9vh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 13 00:38:07.909491 kubelet[2724]: I0313 00:38:07.909407 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce304d33-4bf0-4446-8c15-afbb30b57a37-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ce304d33-4bf0-4446-8c15-afbb30b57a37" (UID: "ce304d33-4bf0-4446-8c15-afbb30b57a37"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 13 00:38:07.910002 systemd[1]: var-lib-kubelet-pods-ce304d33\x2d4bf0\x2d4446\x2d8c15\x2dafbb30b57a37-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 13 00:38:07.913375 systemd[1]: var-lib-kubelet-pods-ce304d33\x2d4bf0\x2d4446\x2d8c15\x2dafbb30b57a37-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx9vh6.mount: Deactivated successfully. Mar 13 00:38:07.996799 kubelet[2724]: I0313 00:38:07.996743 2724 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce304d33-4bf0-4446-8c15-afbb30b57a37-whisker-ca-bundle\") on node \"172-234-197-95\" DevicePath \"\"" Mar 13 00:38:07.996934 kubelet[2724]: I0313 00:38:07.996853 2724 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ce304d33-4bf0-4446-8c15-afbb30b57a37-whisker-backend-key-pair\") on node \"172-234-197-95\" DevicePath \"\"" Mar 13 00:38:07.996934 kubelet[2724]: I0313 00:38:07.996897 2724 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ce304d33-4bf0-4446-8c15-afbb30b57a37-nginx-config\") on node \"172-234-197-95\" DevicePath \"\"" Mar 13 00:38:07.996934 kubelet[2724]: I0313 00:38:07.996909 2724 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x9vh6\" (UniqueName: \"kubernetes.io/projected/ce304d33-4bf0-4446-8c15-afbb30b57a37-kube-api-access-x9vh6\") on node \"172-234-197-95\" DevicePath \"\"" Mar 13 00:38:08.391684 systemd[1]: Removed slice kubepods-besteffort-podce304d33_4bf0_4446_8c15_afbb30b57a37.slice - libcontainer container kubepods-besteffort-podce304d33_4bf0_4446_8c15_afbb30b57a37.slice. 
Mar 13 00:38:08.532152 kubelet[2724]: I0313 00:38:08.532066 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-f6dz7" podStartSLOduration=3.616885168 podStartE2EDuration="11.532051598s" podCreationTimestamp="2026-03-13 00:37:57 +0000 UTC" firstStartedPulling="2026-03-13 00:37:58.145520698 +0000 UTC m=+19.850202411" lastFinishedPulling="2026-03-13 00:38:06.060687128 +0000 UTC m=+27.765368841" observedRunningTime="2026-03-13 00:38:08.529522948 +0000 UTC m=+30.234204661" watchObservedRunningTime="2026-03-13 00:38:08.532051598 +0000 UTC m=+30.236733311" Mar 13 00:38:08.579167 systemd[1]: Created slice kubepods-besteffort-pod2774dc1a_45db_42d2_8354_3a6af4cd1dd7.slice - libcontainer container kubepods-besteffort-pod2774dc1a_45db_42d2_8354_3a6af4cd1dd7.slice. Mar 13 00:38:08.702211 kubelet[2724]: I0313 00:38:08.702030 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/2774dc1a-45db-42d2-8354-3a6af4cd1dd7-nginx-config\") pod \"whisker-8c76f46f8-cblqt\" (UID: \"2774dc1a-45db-42d2-8354-3a6af4cd1dd7\") " pod="calico-system/whisker-8c76f46f8-cblqt" Mar 13 00:38:08.702211 kubelet[2724]: I0313 00:38:08.702103 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbnqj\" (UniqueName: \"kubernetes.io/projected/2774dc1a-45db-42d2-8354-3a6af4cd1dd7-kube-api-access-vbnqj\") pod \"whisker-8c76f46f8-cblqt\" (UID: \"2774dc1a-45db-42d2-8354-3a6af4cd1dd7\") " pod="calico-system/whisker-8c76f46f8-cblqt" Mar 13 00:38:08.702211 kubelet[2724]: I0313 00:38:08.702128 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2774dc1a-45db-42d2-8354-3a6af4cd1dd7-whisker-backend-key-pair\") pod \"whisker-8c76f46f8-cblqt\" (UID: \"2774dc1a-45db-42d2-8354-3a6af4cd1dd7\") " 
pod="calico-system/whisker-8c76f46f8-cblqt" Mar 13 00:38:08.702211 kubelet[2724]: I0313 00:38:08.702174 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2774dc1a-45db-42d2-8354-3a6af4cd1dd7-whisker-ca-bundle\") pod \"whisker-8c76f46f8-cblqt\" (UID: \"2774dc1a-45db-42d2-8354-3a6af4cd1dd7\") " pod="calico-system/whisker-8c76f46f8-cblqt" Mar 13 00:38:08.883403 containerd[1547]: time="2026-03-13T00:38:08.883306868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8c76f46f8-cblqt,Uid:2774dc1a-45db-42d2-8354-3a6af4cd1dd7,Namespace:calico-system,Attempt:0,}" Mar 13 00:38:08.998305 systemd-networkd[1427]: calif52094d4f6a: Link UP Mar 13 00:38:08.999511 systemd-networkd[1427]: calif52094d4f6a: Gained carrier Mar 13 00:38:09.024448 containerd[1547]: 2026-03-13 00:38:08.913 [ERROR][3753] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 13 00:38:09.024448 containerd[1547]: 2026-03-13 00:38:08.930 [INFO][3753] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--197--95-k8s-whisker--8c76f46f8--cblqt-eth0 whisker-8c76f46f8- calico-system 2774dc1a-45db-42d2-8354-3a6af4cd1dd7 940 0 2026-03-13 00:38:08 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8c76f46f8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-234-197-95 whisker-8c76f46f8-cblqt eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif52094d4f6a [] [] }} ContainerID="3f76fae54e63607656cf0c9f1e8ad0b51a6476babfb246b6e492c1b2ab586471" Namespace="calico-system" Pod="whisker-8c76f46f8-cblqt" WorkloadEndpoint="172--234--197--95-k8s-whisker--8c76f46f8--cblqt-" Mar 
13 00:38:09.024448 containerd[1547]: 2026-03-13 00:38:08.930 [INFO][3753] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3f76fae54e63607656cf0c9f1e8ad0b51a6476babfb246b6e492c1b2ab586471" Namespace="calico-system" Pod="whisker-8c76f46f8-cblqt" WorkloadEndpoint="172--234--197--95-k8s-whisker--8c76f46f8--cblqt-eth0" Mar 13 00:38:09.024448 containerd[1547]: 2026-03-13 00:38:08.957 [INFO][3765] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f76fae54e63607656cf0c9f1e8ad0b51a6476babfb246b6e492c1b2ab586471" HandleID="k8s-pod-network.3f76fae54e63607656cf0c9f1e8ad0b51a6476babfb246b6e492c1b2ab586471" Workload="172--234--197--95-k8s-whisker--8c76f46f8--cblqt-eth0" Mar 13 00:38:09.024708 containerd[1547]: 2026-03-13 00:38:08.963 [INFO][3765] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3f76fae54e63607656cf0c9f1e8ad0b51a6476babfb246b6e492c1b2ab586471" HandleID="k8s-pod-network.3f76fae54e63607656cf0c9f1e8ad0b51a6476babfb246b6e492c1b2ab586471" Workload="172--234--197--95-k8s-whisker--8c76f46f8--cblqt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277250), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-197-95", "pod":"whisker-8c76f46f8-cblqt", "timestamp":"2026-03-13 00:38:08.957457748 +0000 UTC"}, Hostname:"172-234-197-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001dedc0)} Mar 13 00:38:09.024708 containerd[1547]: 2026-03-13 00:38:08.963 [INFO][3765] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:38:09.024708 containerd[1547]: 2026-03-13 00:38:08.963 [INFO][3765] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:38:09.024708 containerd[1547]: 2026-03-13 00:38:08.963 [INFO][3765] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-197-95' Mar 13 00:38:09.024708 containerd[1547]: 2026-03-13 00:38:08.965 [INFO][3765] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3f76fae54e63607656cf0c9f1e8ad0b51a6476babfb246b6e492c1b2ab586471" host="172-234-197-95" Mar 13 00:38:09.024708 containerd[1547]: 2026-03-13 00:38:08.968 [INFO][3765] ipam/ipam.go 409: Looking up existing affinities for host host="172-234-197-95" Mar 13 00:38:09.024708 containerd[1547]: 2026-03-13 00:38:08.972 [INFO][3765] ipam/ipam.go 526: Trying affinity for 192.168.79.64/26 host="172-234-197-95" Mar 13 00:38:09.024708 containerd[1547]: 2026-03-13 00:38:08.974 [INFO][3765] ipam/ipam.go 160: Attempting to load block cidr=192.168.79.64/26 host="172-234-197-95" Mar 13 00:38:09.024708 containerd[1547]: 2026-03-13 00:38:08.976 [INFO][3765] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.79.64/26 host="172-234-197-95" Mar 13 00:38:09.024893 containerd[1547]: 2026-03-13 00:38:08.976 [INFO][3765] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.79.64/26 handle="k8s-pod-network.3f76fae54e63607656cf0c9f1e8ad0b51a6476babfb246b6e492c1b2ab586471" host="172-234-197-95" Mar 13 00:38:09.024893 containerd[1547]: 2026-03-13 00:38:08.977 [INFO][3765] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3f76fae54e63607656cf0c9f1e8ad0b51a6476babfb246b6e492c1b2ab586471 Mar 13 00:38:09.024893 containerd[1547]: 2026-03-13 00:38:08.980 [INFO][3765] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.79.64/26 handle="k8s-pod-network.3f76fae54e63607656cf0c9f1e8ad0b51a6476babfb246b6e492c1b2ab586471" host="172-234-197-95" Mar 13 00:38:09.024893 containerd[1547]: 2026-03-13 00:38:08.985 [INFO][3765] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.79.65/26] block=192.168.79.64/26 
handle="k8s-pod-network.3f76fae54e63607656cf0c9f1e8ad0b51a6476babfb246b6e492c1b2ab586471" host="172-234-197-95" Mar 13 00:38:09.024893 containerd[1547]: 2026-03-13 00:38:08.985 [INFO][3765] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.79.65/26] handle="k8s-pod-network.3f76fae54e63607656cf0c9f1e8ad0b51a6476babfb246b6e492c1b2ab586471" host="172-234-197-95" Mar 13 00:38:09.024893 containerd[1547]: 2026-03-13 00:38:08.985 [INFO][3765] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:38:09.024893 containerd[1547]: 2026-03-13 00:38:08.985 [INFO][3765] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.79.65/26] IPv6=[] ContainerID="3f76fae54e63607656cf0c9f1e8ad0b51a6476babfb246b6e492c1b2ab586471" HandleID="k8s-pod-network.3f76fae54e63607656cf0c9f1e8ad0b51a6476babfb246b6e492c1b2ab586471" Workload="172--234--197--95-k8s-whisker--8c76f46f8--cblqt-eth0" Mar 13 00:38:09.025021 containerd[1547]: 2026-03-13 00:38:08.989 [INFO][3753] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3f76fae54e63607656cf0c9f1e8ad0b51a6476babfb246b6e492c1b2ab586471" Namespace="calico-system" Pod="whisker-8c76f46f8-cblqt" WorkloadEndpoint="172--234--197--95-k8s-whisker--8c76f46f8--cblqt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--197--95-k8s-whisker--8c76f46f8--cblqt-eth0", GenerateName:"whisker-8c76f46f8-", Namespace:"calico-system", SelfLink:"", UID:"2774dc1a-45db-42d2-8354-3a6af4cd1dd7", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 38, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8c76f46f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-197-95", ContainerID:"", Pod:"whisker-8c76f46f8-cblqt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.79.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif52094d4f6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:38:09.025021 containerd[1547]: 2026-03-13 00:38:08.989 [INFO][3753] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.65/32] ContainerID="3f76fae54e63607656cf0c9f1e8ad0b51a6476babfb246b6e492c1b2ab586471" Namespace="calico-system" Pod="whisker-8c76f46f8-cblqt" WorkloadEndpoint="172--234--197--95-k8s-whisker--8c76f46f8--cblqt-eth0" Mar 13 00:38:09.025108 containerd[1547]: 2026-03-13 00:38:08.989 [INFO][3753] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif52094d4f6a ContainerID="3f76fae54e63607656cf0c9f1e8ad0b51a6476babfb246b6e492c1b2ab586471" Namespace="calico-system" Pod="whisker-8c76f46f8-cblqt" WorkloadEndpoint="172--234--197--95-k8s-whisker--8c76f46f8--cblqt-eth0" Mar 13 00:38:09.025108 containerd[1547]: 2026-03-13 00:38:08.999 [INFO][3753] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f76fae54e63607656cf0c9f1e8ad0b51a6476babfb246b6e492c1b2ab586471" Namespace="calico-system" Pod="whisker-8c76f46f8-cblqt" WorkloadEndpoint="172--234--197--95-k8s-whisker--8c76f46f8--cblqt-eth0" Mar 13 00:38:09.025171 containerd[1547]: 2026-03-13 00:38:09.000 [INFO][3753] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3f76fae54e63607656cf0c9f1e8ad0b51a6476babfb246b6e492c1b2ab586471" Namespace="calico-system" 
Pod="whisker-8c76f46f8-cblqt" WorkloadEndpoint="172--234--197--95-k8s-whisker--8c76f46f8--cblqt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--197--95-k8s-whisker--8c76f46f8--cblqt-eth0", GenerateName:"whisker-8c76f46f8-", Namespace:"calico-system", SelfLink:"", UID:"2774dc1a-45db-42d2-8354-3a6af4cd1dd7", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 38, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8c76f46f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-197-95", ContainerID:"3f76fae54e63607656cf0c9f1e8ad0b51a6476babfb246b6e492c1b2ab586471", Pod:"whisker-8c76f46f8-cblqt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.79.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif52094d4f6a", MAC:"4e:f2:dc:21:60:c4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:38:09.025226 containerd[1547]: 2026-03-13 00:38:09.012 [INFO][3753] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3f76fae54e63607656cf0c9f1e8ad0b51a6476babfb246b6e492c1b2ab586471" Namespace="calico-system" Pod="whisker-8c76f46f8-cblqt" WorkloadEndpoint="172--234--197--95-k8s-whisker--8c76f46f8--cblqt-eth0" Mar 13 00:38:09.077305 containerd[1547]: time="2026-03-13T00:38:09.077208218Z" 
level=info msg="connecting to shim 3f76fae54e63607656cf0c9f1e8ad0b51a6476babfb246b6e492c1b2ab586471" address="unix:///run/containerd/s/4c708e2361ce43f6c4c71d385a4a53b2e0f57ad94e2ef86b53b29ea33e9294ec" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:38:09.125566 systemd[1]: Started cri-containerd-3f76fae54e63607656cf0c9f1e8ad0b51a6476babfb246b6e492c1b2ab586471.scope - libcontainer container 3f76fae54e63607656cf0c9f1e8ad0b51a6476babfb246b6e492c1b2ab586471. Mar 13 00:38:09.241924 containerd[1547]: time="2026-03-13T00:38:09.241839738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8c76f46f8-cblqt,Uid:2774dc1a-45db-42d2-8354-3a6af4cd1dd7,Namespace:calico-system,Attempt:0,} returns sandbox id \"3f76fae54e63607656cf0c9f1e8ad0b51a6476babfb246b6e492c1b2ab586471\"" Mar 13 00:38:09.244399 containerd[1547]: time="2026-03-13T00:38:09.244348298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 13 00:38:09.517302 kubelet[2724]: I0313 00:38:09.517220 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:38:10.088362 systemd-networkd[1427]: vxlan.calico: Link UP Mar 13 00:38:10.088373 systemd-networkd[1427]: vxlan.calico: Gained carrier Mar 13 00:38:10.389071 kubelet[2724]: I0313 00:38:10.388941 2724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce304d33-4bf0-4446-8c15-afbb30b57a37" path="/var/lib/kubelet/pods/ce304d33-4bf0-4446-8c15-afbb30b57a37/volumes" Mar 13 00:38:10.666085 containerd[1547]: time="2026-03-13T00:38:10.665937638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:10.667126 containerd[1547]: time="2026-03-13T00:38:10.666975848Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 13 00:38:10.667613 containerd[1547]: time="2026-03-13T00:38:10.667580418Z" level=info msg="ImageCreate event 
name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:10.669360 containerd[1547]: time="2026-03-13T00:38:10.669281778Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:10.669997 containerd[1547]: time="2026-03-13T00:38:10.669966858Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.42558519s" Mar 13 00:38:10.670070 containerd[1547]: time="2026-03-13T00:38:10.670056328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 13 00:38:10.674423 containerd[1547]: time="2026-03-13T00:38:10.674380168Z" level=info msg="CreateContainer within sandbox \"3f76fae54e63607656cf0c9f1e8ad0b51a6476babfb246b6e492c1b2ab586471\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 13 00:38:10.686169 containerd[1547]: time="2026-03-13T00:38:10.684745838Z" level=info msg="Container a56818f62295890f9ec157aaa4f444975d9f156f01da2685d7a01ae0cc884489: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:38:10.685865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2638276402.mount: Deactivated successfully. 
Mar 13 00:38:10.700682 containerd[1547]: time="2026-03-13T00:38:10.700638968Z" level=info msg="CreateContainer within sandbox \"3f76fae54e63607656cf0c9f1e8ad0b51a6476babfb246b6e492c1b2ab586471\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"a56818f62295890f9ec157aaa4f444975d9f156f01da2685d7a01ae0cc884489\"" Mar 13 00:38:10.701420 containerd[1547]: time="2026-03-13T00:38:10.701389088Z" level=info msg="StartContainer for \"a56818f62295890f9ec157aaa4f444975d9f156f01da2685d7a01ae0cc884489\"" Mar 13 00:38:10.703304 containerd[1547]: time="2026-03-13T00:38:10.703192378Z" level=info msg="connecting to shim a56818f62295890f9ec157aaa4f444975d9f156f01da2685d7a01ae0cc884489" address="unix:///run/containerd/s/4c708e2361ce43f6c4c71d385a4a53b2e0f57ad94e2ef86b53b29ea33e9294ec" protocol=ttrpc version=3 Mar 13 00:38:10.729391 systemd[1]: Started cri-containerd-a56818f62295890f9ec157aaa4f444975d9f156f01da2685d7a01ae0cc884489.scope - libcontainer container a56818f62295890f9ec157aaa4f444975d9f156f01da2685d7a01ae0cc884489. Mar 13 00:38:10.789457 containerd[1547]: time="2026-03-13T00:38:10.789405728Z" level=info msg="StartContainer for \"a56818f62295890f9ec157aaa4f444975d9f156f01da2685d7a01ae0cc884489\" returns successfully" Mar 13 00:38:10.792900 containerd[1547]: time="2026-03-13T00:38:10.792846978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 13 00:38:11.000916 systemd-networkd[1427]: calif52094d4f6a: Gained IPv6LL Mar 13 00:38:11.640448 systemd-networkd[1427]: vxlan.calico: Gained IPv6LL Mar 13 00:38:11.674413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2376347000.mount: Deactivated successfully. 
Mar 13 00:38:11.688294 containerd[1547]: time="2026-03-13T00:38:11.687610678Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:11.688294 containerd[1547]: time="2026-03-13T00:38:11.688241408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 13 00:38:11.688753 containerd[1547]: time="2026-03-13T00:38:11.688732368Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:11.689954 containerd[1547]: time="2026-03-13T00:38:11.689933658Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:11.690593 containerd[1547]: time="2026-03-13T00:38:11.690572998Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 897.67559ms" Mar 13 00:38:11.690668 containerd[1547]: time="2026-03-13T00:38:11.690654738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 13 00:38:11.694161 containerd[1547]: time="2026-03-13T00:38:11.694131148Z" level=info msg="CreateContainer within sandbox \"3f76fae54e63607656cf0c9f1e8ad0b51a6476babfb246b6e492c1b2ab586471\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 13 00:38:11.698408 
containerd[1547]: time="2026-03-13T00:38:11.698373218Z" level=info msg="Container 6f46a4016f003fd7355c61d03f4cfebf23ec658483404ccece69f3a72f41db3b: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:38:11.713913 containerd[1547]: time="2026-03-13T00:38:11.713889998Z" level=info msg="CreateContainer within sandbox \"3f76fae54e63607656cf0c9f1e8ad0b51a6476babfb246b6e492c1b2ab586471\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"6f46a4016f003fd7355c61d03f4cfebf23ec658483404ccece69f3a72f41db3b\"" Mar 13 00:38:11.714569 containerd[1547]: time="2026-03-13T00:38:11.714550458Z" level=info msg="StartContainer for \"6f46a4016f003fd7355c61d03f4cfebf23ec658483404ccece69f3a72f41db3b\"" Mar 13 00:38:11.715840 containerd[1547]: time="2026-03-13T00:38:11.715472878Z" level=info msg="connecting to shim 6f46a4016f003fd7355c61d03f4cfebf23ec658483404ccece69f3a72f41db3b" address="unix:///run/containerd/s/4c708e2361ce43f6c4c71d385a4a53b2e0f57ad94e2ef86b53b29ea33e9294ec" protocol=ttrpc version=3 Mar 13 00:38:11.743422 systemd[1]: Started cri-containerd-6f46a4016f003fd7355c61d03f4cfebf23ec658483404ccece69f3a72f41db3b.scope - libcontainer container 6f46a4016f003fd7355c61d03f4cfebf23ec658483404ccece69f3a72f41db3b. 
Mar 13 00:38:11.801102 containerd[1547]: time="2026-03-13T00:38:11.801064548Z" level=info msg="StartContainer for \"6f46a4016f003fd7355c61d03f4cfebf23ec658483404ccece69f3a72f41db3b\" returns successfully" Mar 13 00:38:12.539599 kubelet[2724]: I0313 00:38:12.539257 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-8c76f46f8-cblqt" podStartSLOduration=2.091644828 podStartE2EDuration="4.539240018s" podCreationTimestamp="2026-03-13 00:38:08 +0000 UTC" firstStartedPulling="2026-03-13 00:38:09.243905258 +0000 UTC m=+30.948586971" lastFinishedPulling="2026-03-13 00:38:11.691500448 +0000 UTC m=+33.396182161" observedRunningTime="2026-03-13 00:38:12.538893878 +0000 UTC m=+34.243575591" watchObservedRunningTime="2026-03-13 00:38:12.539240018 +0000 UTC m=+34.243921731" Mar 13 00:38:18.386098 kubelet[2724]: E0313 00:38:18.384384 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:38:18.388356 containerd[1547]: time="2026-03-13T00:38:18.387014938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wwppt,Uid:93729285-a189-4abc-ba2b-049bef976bb3,Namespace:kube-system,Attempt:0,}" Mar 13 00:38:18.389023 containerd[1547]: time="2026-03-13T00:38:18.388582888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-npsmc,Uid:53df9c22-fa7e-4b17-8b53-e9ef874e7bac,Namespace:calico-system,Attempt:0,}" Mar 13 00:38:18.525724 systemd-networkd[1427]: cali18726c7dfb5: Link UP Mar 13 00:38:18.528644 systemd-networkd[1427]: cali18726c7dfb5: Gained carrier Mar 13 00:38:18.566111 containerd[1547]: 2026-03-13 00:38:18.454 [INFO][4140] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--197--95-k8s-coredns--674b8bbfcf--wwppt-eth0 coredns-674b8bbfcf- kube-system 
93729285-a189-4abc-ba2b-049bef976bb3 887 0 2026-03-13 00:37:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-234-197-95 coredns-674b8bbfcf-wwppt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali18726c7dfb5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b878295078880e0ad07090cae05530c9575d91a36319754be316a4dc3d91b730" Namespace="kube-system" Pod="coredns-674b8bbfcf-wwppt" WorkloadEndpoint="172--234--197--95-k8s-coredns--674b8bbfcf--wwppt-" Mar 13 00:38:18.566111 containerd[1547]: 2026-03-13 00:38:18.456 [INFO][4140] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b878295078880e0ad07090cae05530c9575d91a36319754be316a4dc3d91b730" Namespace="kube-system" Pod="coredns-674b8bbfcf-wwppt" WorkloadEndpoint="172--234--197--95-k8s-coredns--674b8bbfcf--wwppt-eth0" Mar 13 00:38:18.566111 containerd[1547]: 2026-03-13 00:38:18.487 [INFO][4165] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b878295078880e0ad07090cae05530c9575d91a36319754be316a4dc3d91b730" HandleID="k8s-pod-network.b878295078880e0ad07090cae05530c9575d91a36319754be316a4dc3d91b730" Workload="172--234--197--95-k8s-coredns--674b8bbfcf--wwppt-eth0" Mar 13 00:38:18.566406 containerd[1547]: 2026-03-13 00:38:18.495 [INFO][4165] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b878295078880e0ad07090cae05530c9575d91a36319754be316a4dc3d91b730" HandleID="k8s-pod-network.b878295078880e0ad07090cae05530c9575d91a36319754be316a4dc3d91b730" Workload="172--234--197--95-k8s-coredns--674b8bbfcf--wwppt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fdaf0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-234-197-95", "pod":"coredns-674b8bbfcf-wwppt", "timestamp":"2026-03-13 00:38:18.487622478 +0000 UTC"}, 
Hostname:"172-234-197-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000295340)} Mar 13 00:38:18.566406 containerd[1547]: 2026-03-13 00:38:18.495 [INFO][4165] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:38:18.566406 containerd[1547]: 2026-03-13 00:38:18.495 [INFO][4165] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:38:18.566406 containerd[1547]: 2026-03-13 00:38:18.495 [INFO][4165] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-197-95' Mar 13 00:38:18.566406 containerd[1547]: 2026-03-13 00:38:18.498 [INFO][4165] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b878295078880e0ad07090cae05530c9575d91a36319754be316a4dc3d91b730" host="172-234-197-95" Mar 13 00:38:18.566406 containerd[1547]: 2026-03-13 00:38:18.502 [INFO][4165] ipam/ipam.go 409: Looking up existing affinities for host host="172-234-197-95" Mar 13 00:38:18.566406 containerd[1547]: 2026-03-13 00:38:18.506 [INFO][4165] ipam/ipam.go 526: Trying affinity for 192.168.79.64/26 host="172-234-197-95" Mar 13 00:38:18.566406 containerd[1547]: 2026-03-13 00:38:18.507 [INFO][4165] ipam/ipam.go 160: Attempting to load block cidr=192.168.79.64/26 host="172-234-197-95" Mar 13 00:38:18.566406 containerd[1547]: 2026-03-13 00:38:18.509 [INFO][4165] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.79.64/26 host="172-234-197-95" Mar 13 00:38:18.566744 containerd[1547]: 2026-03-13 00:38:18.509 [INFO][4165] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.79.64/26 handle="k8s-pod-network.b878295078880e0ad07090cae05530c9575d91a36319754be316a4dc3d91b730" host="172-234-197-95" Mar 13 00:38:18.566744 containerd[1547]: 2026-03-13 00:38:18.510 [INFO][4165] ipam/ipam.go 1806: Creating new 
handle: k8s-pod-network.b878295078880e0ad07090cae05530c9575d91a36319754be316a4dc3d91b730 Mar 13 00:38:18.566744 containerd[1547]: 2026-03-13 00:38:18.514 [INFO][4165] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.79.64/26 handle="k8s-pod-network.b878295078880e0ad07090cae05530c9575d91a36319754be316a4dc3d91b730" host="172-234-197-95" Mar 13 00:38:18.566744 containerd[1547]: 2026-03-13 00:38:18.518 [INFO][4165] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.79.66/26] block=192.168.79.64/26 handle="k8s-pod-network.b878295078880e0ad07090cae05530c9575d91a36319754be316a4dc3d91b730" host="172-234-197-95" Mar 13 00:38:18.566744 containerd[1547]: 2026-03-13 00:38:18.518 [INFO][4165] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.79.66/26] handle="k8s-pod-network.b878295078880e0ad07090cae05530c9575d91a36319754be316a4dc3d91b730" host="172-234-197-95" Mar 13 00:38:18.566744 containerd[1547]: 2026-03-13 00:38:18.518 [INFO][4165] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 13 00:38:18.566744 containerd[1547]: 2026-03-13 00:38:18.518 [INFO][4165] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.79.66/26] IPv6=[] ContainerID="b878295078880e0ad07090cae05530c9575d91a36319754be316a4dc3d91b730" HandleID="k8s-pod-network.b878295078880e0ad07090cae05530c9575d91a36319754be316a4dc3d91b730" Workload="172--234--197--95-k8s-coredns--674b8bbfcf--wwppt-eth0" Mar 13 00:38:18.567091 containerd[1547]: 2026-03-13 00:38:18.521 [INFO][4140] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b878295078880e0ad07090cae05530c9575d91a36319754be316a4dc3d91b730" Namespace="kube-system" Pod="coredns-674b8bbfcf-wwppt" WorkloadEndpoint="172--234--197--95-k8s-coredns--674b8bbfcf--wwppt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--197--95-k8s-coredns--674b8bbfcf--wwppt-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"93729285-a189-4abc-ba2b-049bef976bb3", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 37, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-197-95", ContainerID:"", Pod:"coredns-674b8bbfcf-wwppt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18726c7dfb5", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:38:18.567091 containerd[1547]: 2026-03-13 00:38:18.521 [INFO][4140] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.66/32] ContainerID="b878295078880e0ad07090cae05530c9575d91a36319754be316a4dc3d91b730" Namespace="kube-system" Pod="coredns-674b8bbfcf-wwppt" WorkloadEndpoint="172--234--197--95-k8s-coredns--674b8bbfcf--wwppt-eth0" Mar 13 00:38:18.567091 containerd[1547]: 2026-03-13 00:38:18.521 [INFO][4140] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18726c7dfb5 ContainerID="b878295078880e0ad07090cae05530c9575d91a36319754be316a4dc3d91b730" Namespace="kube-system" Pod="coredns-674b8bbfcf-wwppt" WorkloadEndpoint="172--234--197--95-k8s-coredns--674b8bbfcf--wwppt-eth0" Mar 13 00:38:18.567091 containerd[1547]: 2026-03-13 00:38:18.528 [INFO][4140] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b878295078880e0ad07090cae05530c9575d91a36319754be316a4dc3d91b730" Namespace="kube-system" Pod="coredns-674b8bbfcf-wwppt" WorkloadEndpoint="172--234--197--95-k8s-coredns--674b8bbfcf--wwppt-eth0" Mar 13 00:38:18.567091 containerd[1547]: 2026-03-13 00:38:18.530 [INFO][4140] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b878295078880e0ad07090cae05530c9575d91a36319754be316a4dc3d91b730" Namespace="kube-system" Pod="coredns-674b8bbfcf-wwppt" WorkloadEndpoint="172--234--197--95-k8s-coredns--674b8bbfcf--wwppt-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--197--95-k8s-coredns--674b8bbfcf--wwppt-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"93729285-a189-4abc-ba2b-049bef976bb3", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 37, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-197-95", ContainerID:"b878295078880e0ad07090cae05530c9575d91a36319754be316a4dc3d91b730", Pod:"coredns-674b8bbfcf-wwppt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18726c7dfb5", MAC:"02:6c:41:88:bc:3c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:38:18.567091 containerd[1547]: 2026-03-13 00:38:18.557 [INFO][4140] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="b878295078880e0ad07090cae05530c9575d91a36319754be316a4dc3d91b730" Namespace="kube-system" Pod="coredns-674b8bbfcf-wwppt" WorkloadEndpoint="172--234--197--95-k8s-coredns--674b8bbfcf--wwppt-eth0" Mar 13 00:38:18.616306 kubelet[2724]: I0313 00:38:18.613240 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:38:18.656141 containerd[1547]: time="2026-03-13T00:38:18.655974568Z" level=info msg="connecting to shim b878295078880e0ad07090cae05530c9575d91a36319754be316a4dc3d91b730" address="unix:///run/containerd/s/4f4af8ce39830a33f9b2db96ae23aa9b39226f552bc1b03c97e9d0e41c48815e" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:38:18.715106 systemd-networkd[1427]: cali8c497d48a89: Link UP Mar 13 00:38:18.717402 systemd-networkd[1427]: cali8c497d48a89: Gained carrier Mar 13 00:38:18.731739 systemd[1]: Started cri-containerd-b878295078880e0ad07090cae05530c9575d91a36319754be316a4dc3d91b730.scope - libcontainer container b878295078880e0ad07090cae05530c9575d91a36319754be316a4dc3d91b730. 
Mar 13 00:38:18.750111 containerd[1547]: 2026-03-13 00:38:18.457 [INFO][4144] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--197--95-k8s-csi--node--driver--npsmc-eth0 csi-node-driver- calico-system 53df9c22-fa7e-4b17-8b53-e9ef874e7bac 769 0 2026-03-13 00:37:57 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-234-197-95 csi-node-driver-npsmc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8c497d48a89 [] [] }} ContainerID="917cb57a0fd23acc6afbd9d5a95272732a1dfe27a8994f981053d84995cd1fed" Namespace="calico-system" Pod="csi-node-driver-npsmc" WorkloadEndpoint="172--234--197--95-k8s-csi--node--driver--npsmc-" Mar 13 00:38:18.750111 containerd[1547]: 2026-03-13 00:38:18.458 [INFO][4144] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="917cb57a0fd23acc6afbd9d5a95272732a1dfe27a8994f981053d84995cd1fed" Namespace="calico-system" Pod="csi-node-driver-npsmc" WorkloadEndpoint="172--234--197--95-k8s-csi--node--driver--npsmc-eth0" Mar 13 00:38:18.750111 containerd[1547]: 2026-03-13 00:38:18.491 [INFO][4170] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="917cb57a0fd23acc6afbd9d5a95272732a1dfe27a8994f981053d84995cd1fed" HandleID="k8s-pod-network.917cb57a0fd23acc6afbd9d5a95272732a1dfe27a8994f981053d84995cd1fed" Workload="172--234--197--95-k8s-csi--node--driver--npsmc-eth0" Mar 13 00:38:18.750111 containerd[1547]: 2026-03-13 00:38:18.499 [INFO][4170] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="917cb57a0fd23acc6afbd9d5a95272732a1dfe27a8994f981053d84995cd1fed" HandleID="k8s-pod-network.917cb57a0fd23acc6afbd9d5a95272732a1dfe27a8994f981053d84995cd1fed" 
Workload="172--234--197--95-k8s-csi--node--driver--npsmc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde80), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-197-95", "pod":"csi-node-driver-npsmc", "timestamp":"2026-03-13 00:38:18.491899218 +0000 UTC"}, Hostname:"172-234-197-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001882c0)} Mar 13 00:38:18.750111 containerd[1547]: 2026-03-13 00:38:18.499 [INFO][4170] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:38:18.750111 containerd[1547]: 2026-03-13 00:38:18.518 [INFO][4170] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:38:18.750111 containerd[1547]: 2026-03-13 00:38:18.518 [INFO][4170] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-197-95' Mar 13 00:38:18.750111 containerd[1547]: 2026-03-13 00:38:18.600 [INFO][4170] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.917cb57a0fd23acc6afbd9d5a95272732a1dfe27a8994f981053d84995cd1fed" host="172-234-197-95" Mar 13 00:38:18.750111 containerd[1547]: 2026-03-13 00:38:18.650 [INFO][4170] ipam/ipam.go 409: Looking up existing affinities for host host="172-234-197-95" Mar 13 00:38:18.750111 containerd[1547]: 2026-03-13 00:38:18.657 [INFO][4170] ipam/ipam.go 526: Trying affinity for 192.168.79.64/26 host="172-234-197-95" Mar 13 00:38:18.750111 containerd[1547]: 2026-03-13 00:38:18.658 [INFO][4170] ipam/ipam.go 160: Attempting to load block cidr=192.168.79.64/26 host="172-234-197-95" Mar 13 00:38:18.750111 containerd[1547]: 2026-03-13 00:38:18.661 [INFO][4170] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.79.64/26 host="172-234-197-95" Mar 13 00:38:18.750111 containerd[1547]: 2026-03-13 00:38:18.661 [INFO][4170] 
ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.79.64/26 handle="k8s-pod-network.917cb57a0fd23acc6afbd9d5a95272732a1dfe27a8994f981053d84995cd1fed" host="172-234-197-95" Mar 13 00:38:18.750111 containerd[1547]: 2026-03-13 00:38:18.666 [INFO][4170] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.917cb57a0fd23acc6afbd9d5a95272732a1dfe27a8994f981053d84995cd1fed Mar 13 00:38:18.750111 containerd[1547]: 2026-03-13 00:38:18.679 [INFO][4170] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.79.64/26 handle="k8s-pod-network.917cb57a0fd23acc6afbd9d5a95272732a1dfe27a8994f981053d84995cd1fed" host="172-234-197-95" Mar 13 00:38:18.750111 containerd[1547]: 2026-03-13 00:38:18.688 [INFO][4170] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.79.67/26] block=192.168.79.64/26 handle="k8s-pod-network.917cb57a0fd23acc6afbd9d5a95272732a1dfe27a8994f981053d84995cd1fed" host="172-234-197-95" Mar 13 00:38:18.750111 containerd[1547]: 2026-03-13 00:38:18.688 [INFO][4170] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.79.67/26] handle="k8s-pod-network.917cb57a0fd23acc6afbd9d5a95272732a1dfe27a8994f981053d84995cd1fed" host="172-234-197-95" Mar 13 00:38:18.750111 containerd[1547]: 2026-03-13 00:38:18.688 [INFO][4170] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 13 00:38:18.750111 containerd[1547]: 2026-03-13 00:38:18.688 [INFO][4170] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.79.67/26] IPv6=[] ContainerID="917cb57a0fd23acc6afbd9d5a95272732a1dfe27a8994f981053d84995cd1fed" HandleID="k8s-pod-network.917cb57a0fd23acc6afbd9d5a95272732a1dfe27a8994f981053d84995cd1fed" Workload="172--234--197--95-k8s-csi--node--driver--npsmc-eth0" Mar 13 00:38:18.751607 containerd[1547]: 2026-03-13 00:38:18.698 [INFO][4144] cni-plugin/k8s.go 418: Populated endpoint ContainerID="917cb57a0fd23acc6afbd9d5a95272732a1dfe27a8994f981053d84995cd1fed" Namespace="calico-system" Pod="csi-node-driver-npsmc" WorkloadEndpoint="172--234--197--95-k8s-csi--node--driver--npsmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--197--95-k8s-csi--node--driver--npsmc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"53df9c22-fa7e-4b17-8b53-e9ef874e7bac", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 37, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-197-95", ContainerID:"", Pod:"csi-node-driver-npsmc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.79.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8c497d48a89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:38:18.751607 containerd[1547]: 2026-03-13 00:38:18.700 [INFO][4144] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.67/32] ContainerID="917cb57a0fd23acc6afbd9d5a95272732a1dfe27a8994f981053d84995cd1fed" Namespace="calico-system" Pod="csi-node-driver-npsmc" WorkloadEndpoint="172--234--197--95-k8s-csi--node--driver--npsmc-eth0" Mar 13 00:38:18.751607 containerd[1547]: 2026-03-13 00:38:18.704 [INFO][4144] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8c497d48a89 ContainerID="917cb57a0fd23acc6afbd9d5a95272732a1dfe27a8994f981053d84995cd1fed" Namespace="calico-system" Pod="csi-node-driver-npsmc" WorkloadEndpoint="172--234--197--95-k8s-csi--node--driver--npsmc-eth0" Mar 13 00:38:18.751607 containerd[1547]: 2026-03-13 00:38:18.727 [INFO][4144] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="917cb57a0fd23acc6afbd9d5a95272732a1dfe27a8994f981053d84995cd1fed" Namespace="calico-system" Pod="csi-node-driver-npsmc" WorkloadEndpoint="172--234--197--95-k8s-csi--node--driver--npsmc-eth0" Mar 13 00:38:18.751607 containerd[1547]: 2026-03-13 00:38:18.729 [INFO][4144] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="917cb57a0fd23acc6afbd9d5a95272732a1dfe27a8994f981053d84995cd1fed" Namespace="calico-system" Pod="csi-node-driver-npsmc" WorkloadEndpoint="172--234--197--95-k8s-csi--node--driver--npsmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--197--95-k8s-csi--node--driver--npsmc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"53df9c22-fa7e-4b17-8b53-e9ef874e7bac", ResourceVersion:"769", 
Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 37, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-197-95", ContainerID:"917cb57a0fd23acc6afbd9d5a95272732a1dfe27a8994f981053d84995cd1fed", Pod:"csi-node-driver-npsmc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.79.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8c497d48a89", MAC:"5e:c5:c4:6e:82:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:38:18.751607 containerd[1547]: 2026-03-13 00:38:18.746 [INFO][4144] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="917cb57a0fd23acc6afbd9d5a95272732a1dfe27a8994f981053d84995cd1fed" Namespace="calico-system" Pod="csi-node-driver-npsmc" WorkloadEndpoint="172--234--197--95-k8s-csi--node--driver--npsmc-eth0" Mar 13 00:38:18.814464 containerd[1547]: time="2026-03-13T00:38:18.814430678Z" level=info msg="connecting to shim 917cb57a0fd23acc6afbd9d5a95272732a1dfe27a8994f981053d84995cd1fed" address="unix:///run/containerd/s/d0d3bd503c2ec734bba5e771d2e8ddf41a85b50f0781335c010a0471bd6a01a5" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:38:18.855627 containerd[1547]: time="2026-03-13T00:38:18.855592038Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wwppt,Uid:93729285-a189-4abc-ba2b-049bef976bb3,Namespace:kube-system,Attempt:0,} returns sandbox id \"b878295078880e0ad07090cae05530c9575d91a36319754be316a4dc3d91b730\"" Mar 13 00:38:18.858028 kubelet[2724]: E0313 00:38:18.857875 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:38:18.858566 systemd[1]: Started cri-containerd-917cb57a0fd23acc6afbd9d5a95272732a1dfe27a8994f981053d84995cd1fed.scope - libcontainer container 917cb57a0fd23acc6afbd9d5a95272732a1dfe27a8994f981053d84995cd1fed. Mar 13 00:38:18.867321 containerd[1547]: time="2026-03-13T00:38:18.866898738Z" level=info msg="CreateContainer within sandbox \"b878295078880e0ad07090cae05530c9575d91a36319754be316a4dc3d91b730\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 00:38:18.883690 containerd[1547]: time="2026-03-13T00:38:18.883662728Z" level=info msg="Container 9b3343096537a461989c63475a8d9cd2aa4d3ef5199e51b5b3c18e60daf251f9: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:38:18.888757 containerd[1547]: time="2026-03-13T00:38:18.888619778Z" level=info msg="CreateContainer within sandbox \"b878295078880e0ad07090cae05530c9575d91a36319754be316a4dc3d91b730\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9b3343096537a461989c63475a8d9cd2aa4d3ef5199e51b5b3c18e60daf251f9\"" Mar 13 00:38:18.889531 containerd[1547]: time="2026-03-13T00:38:18.889506878Z" level=info msg="StartContainer for \"9b3343096537a461989c63475a8d9cd2aa4d3ef5199e51b5b3c18e60daf251f9\"" Mar 13 00:38:18.890208 containerd[1547]: time="2026-03-13T00:38:18.890187368Z" level=info msg="connecting to shim 9b3343096537a461989c63475a8d9cd2aa4d3ef5199e51b5b3c18e60daf251f9" address="unix:///run/containerd/s/4f4af8ce39830a33f9b2db96ae23aa9b39226f552bc1b03c97e9d0e41c48815e" protocol=ttrpc version=3 
Mar 13 00:38:18.925421 systemd[1]: Started cri-containerd-9b3343096537a461989c63475a8d9cd2aa4d3ef5199e51b5b3c18e60daf251f9.scope - libcontainer container 9b3343096537a461989c63475a8d9cd2aa4d3ef5199e51b5b3c18e60daf251f9. Mar 13 00:38:18.974638 containerd[1547]: time="2026-03-13T00:38:18.974584358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-npsmc,Uid:53df9c22-fa7e-4b17-8b53-e9ef874e7bac,Namespace:calico-system,Attempt:0,} returns sandbox id \"917cb57a0fd23acc6afbd9d5a95272732a1dfe27a8994f981053d84995cd1fed\"" Mar 13 00:38:18.976847 containerd[1547]: time="2026-03-13T00:38:18.976652828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 13 00:38:18.987643 containerd[1547]: time="2026-03-13T00:38:18.987622878Z" level=info msg="StartContainer for \"9b3343096537a461989c63475a8d9cd2aa4d3ef5199e51b5b3c18e60daf251f9\" returns successfully" Mar 13 00:38:19.385664 containerd[1547]: time="2026-03-13T00:38:19.384841458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cbbd89d84-9crw6,Uid:c0e98f25-f75c-4aab-873f-d1379f52de43,Namespace:calico-system,Attempt:0,}" Mar 13 00:38:19.389983 containerd[1547]: time="2026-03-13T00:38:19.389929548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b66459b76-24wfm,Uid:be52a627-668e-446b-a54a-44e513eacab5,Namespace:calico-system,Attempt:0,}" Mar 13 00:38:19.541586 systemd-networkd[1427]: calid62377f73af: Link UP Mar 13 00:38:19.542839 systemd-networkd[1427]: calid62377f73af: Gained carrier Mar 13 00:38:19.552531 kubelet[2724]: E0313 00:38:19.552492 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:38:19.571217 containerd[1547]: 2026-03-13 00:38:19.459 [INFO][4387] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{172--234--197--95-k8s-calico--kube--controllers--7cbbd89d84--9crw6-eth0 calico-kube-controllers-7cbbd89d84- calico-system c0e98f25-f75c-4aab-873f-d1379f52de43 886 0 2026-03-13 00:37:57 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7cbbd89d84 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-234-197-95 calico-kube-controllers-7cbbd89d84-9crw6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid62377f73af [] [] }} ContainerID="960e2b68741b41941d4b48bce68634453a1b630a31804d829e990a246de4b014" Namespace="calico-system" Pod="calico-kube-controllers-7cbbd89d84-9crw6" WorkloadEndpoint="172--234--197--95-k8s-calico--kube--controllers--7cbbd89d84--9crw6-" Mar 13 00:38:19.571217 containerd[1547]: 2026-03-13 00:38:19.459 [INFO][4387] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="960e2b68741b41941d4b48bce68634453a1b630a31804d829e990a246de4b014" Namespace="calico-system" Pod="calico-kube-controllers-7cbbd89d84-9crw6" WorkloadEndpoint="172--234--197--95-k8s-calico--kube--controllers--7cbbd89d84--9crw6-eth0" Mar 13 00:38:19.571217 containerd[1547]: 2026-03-13 00:38:19.490 [INFO][4413] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="960e2b68741b41941d4b48bce68634453a1b630a31804d829e990a246de4b014" HandleID="k8s-pod-network.960e2b68741b41941d4b48bce68634453a1b630a31804d829e990a246de4b014" Workload="172--234--197--95-k8s-calico--kube--controllers--7cbbd89d84--9crw6-eth0" Mar 13 00:38:19.571217 containerd[1547]: 2026-03-13 00:38:19.498 [INFO][4413] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="960e2b68741b41941d4b48bce68634453a1b630a31804d829e990a246de4b014" HandleID="k8s-pod-network.960e2b68741b41941d4b48bce68634453a1b630a31804d829e990a246de4b014" 
Workload="172--234--197--95-k8s-calico--kube--controllers--7cbbd89d84--9crw6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde80), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-197-95", "pod":"calico-kube-controllers-7cbbd89d84-9crw6", "timestamp":"2026-03-13 00:38:19.490100558 +0000 UTC"}, Hostname:"172-234-197-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000576160)} Mar 13 00:38:19.571217 containerd[1547]: 2026-03-13 00:38:19.498 [INFO][4413] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:38:19.571217 containerd[1547]: 2026-03-13 00:38:19.498 [INFO][4413] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:38:19.571217 containerd[1547]: 2026-03-13 00:38:19.498 [INFO][4413] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-197-95' Mar 13 00:38:19.571217 containerd[1547]: 2026-03-13 00:38:19.504 [INFO][4413] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.960e2b68741b41941d4b48bce68634453a1b630a31804d829e990a246de4b014" host="172-234-197-95" Mar 13 00:38:19.571217 containerd[1547]: 2026-03-13 00:38:19.510 [INFO][4413] ipam/ipam.go 409: Looking up existing affinities for host host="172-234-197-95" Mar 13 00:38:19.571217 containerd[1547]: 2026-03-13 00:38:19.514 [INFO][4413] ipam/ipam.go 526: Trying affinity for 192.168.79.64/26 host="172-234-197-95" Mar 13 00:38:19.571217 containerd[1547]: 2026-03-13 00:38:19.516 [INFO][4413] ipam/ipam.go 160: Attempting to load block cidr=192.168.79.64/26 host="172-234-197-95" Mar 13 00:38:19.571217 containerd[1547]: 2026-03-13 00:38:19.517 [INFO][4413] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.79.64/26 host="172-234-197-95" Mar 13 00:38:19.571217 containerd[1547]: 
2026-03-13 00:38:19.518 [INFO][4413] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.79.64/26 handle="k8s-pod-network.960e2b68741b41941d4b48bce68634453a1b630a31804d829e990a246de4b014" host="172-234-197-95" Mar 13 00:38:19.571217 containerd[1547]: 2026-03-13 00:38:19.519 [INFO][4413] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.960e2b68741b41941d4b48bce68634453a1b630a31804d829e990a246de4b014 Mar 13 00:38:19.571217 containerd[1547]: 2026-03-13 00:38:19.523 [INFO][4413] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.79.64/26 handle="k8s-pod-network.960e2b68741b41941d4b48bce68634453a1b630a31804d829e990a246de4b014" host="172-234-197-95" Mar 13 00:38:19.571217 containerd[1547]: 2026-03-13 00:38:19.529 [INFO][4413] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.79.68/26] block=192.168.79.64/26 handle="k8s-pod-network.960e2b68741b41941d4b48bce68634453a1b630a31804d829e990a246de4b014" host="172-234-197-95" Mar 13 00:38:19.571217 containerd[1547]: 2026-03-13 00:38:19.530 [INFO][4413] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.79.68/26] handle="k8s-pod-network.960e2b68741b41941d4b48bce68634453a1b630a31804d829e990a246de4b014" host="172-234-197-95" Mar 13 00:38:19.571217 containerd[1547]: 2026-03-13 00:38:19.530 [INFO][4413] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 13 00:38:19.571217 containerd[1547]: 2026-03-13 00:38:19.530 [INFO][4413] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.79.68/26] IPv6=[] ContainerID="960e2b68741b41941d4b48bce68634453a1b630a31804d829e990a246de4b014" HandleID="k8s-pod-network.960e2b68741b41941d4b48bce68634453a1b630a31804d829e990a246de4b014" Workload="172--234--197--95-k8s-calico--kube--controllers--7cbbd89d84--9crw6-eth0" Mar 13 00:38:19.571862 containerd[1547]: 2026-03-13 00:38:19.533 [INFO][4387] cni-plugin/k8s.go 418: Populated endpoint ContainerID="960e2b68741b41941d4b48bce68634453a1b630a31804d829e990a246de4b014" Namespace="calico-system" Pod="calico-kube-controllers-7cbbd89d84-9crw6" WorkloadEndpoint="172--234--197--95-k8s-calico--kube--controllers--7cbbd89d84--9crw6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--197--95-k8s-calico--kube--controllers--7cbbd89d84--9crw6-eth0", GenerateName:"calico-kube-controllers-7cbbd89d84-", Namespace:"calico-system", SelfLink:"", UID:"c0e98f25-f75c-4aab-873f-d1379f52de43", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 37, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cbbd89d84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-197-95", ContainerID:"", Pod:"calico-kube-controllers-7cbbd89d84-9crw6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.79.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid62377f73af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:38:19.571862 containerd[1547]: 2026-03-13 00:38:19.534 [INFO][4387] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.68/32] ContainerID="960e2b68741b41941d4b48bce68634453a1b630a31804d829e990a246de4b014" Namespace="calico-system" Pod="calico-kube-controllers-7cbbd89d84-9crw6" WorkloadEndpoint="172--234--197--95-k8s-calico--kube--controllers--7cbbd89d84--9crw6-eth0" Mar 13 00:38:19.571862 containerd[1547]: 2026-03-13 00:38:19.534 [INFO][4387] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid62377f73af ContainerID="960e2b68741b41941d4b48bce68634453a1b630a31804d829e990a246de4b014" Namespace="calico-system" Pod="calico-kube-controllers-7cbbd89d84-9crw6" WorkloadEndpoint="172--234--197--95-k8s-calico--kube--controllers--7cbbd89d84--9crw6-eth0" Mar 13 00:38:19.571862 containerd[1547]: 2026-03-13 00:38:19.541 [INFO][4387] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="960e2b68741b41941d4b48bce68634453a1b630a31804d829e990a246de4b014" Namespace="calico-system" Pod="calico-kube-controllers-7cbbd89d84-9crw6" WorkloadEndpoint="172--234--197--95-k8s-calico--kube--controllers--7cbbd89d84--9crw6-eth0" Mar 13 00:38:19.571862 containerd[1547]: 2026-03-13 00:38:19.542 [INFO][4387] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="960e2b68741b41941d4b48bce68634453a1b630a31804d829e990a246de4b014" Namespace="calico-system" Pod="calico-kube-controllers-7cbbd89d84-9crw6" WorkloadEndpoint="172--234--197--95-k8s-calico--kube--controllers--7cbbd89d84--9crw6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--197--95-k8s-calico--kube--controllers--7cbbd89d84--9crw6-eth0", GenerateName:"calico-kube-controllers-7cbbd89d84-", Namespace:"calico-system", SelfLink:"", UID:"c0e98f25-f75c-4aab-873f-d1379f52de43", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 37, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cbbd89d84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-197-95", ContainerID:"960e2b68741b41941d4b48bce68634453a1b630a31804d829e990a246de4b014", Pod:"calico-kube-controllers-7cbbd89d84-9crw6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.79.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid62377f73af", MAC:"2e:da:31:b4:e4:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:38:19.571862 containerd[1547]: 2026-03-13 00:38:19.566 [INFO][4387] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="960e2b68741b41941d4b48bce68634453a1b630a31804d829e990a246de4b014" Namespace="calico-system" Pod="calico-kube-controllers-7cbbd89d84-9crw6" WorkloadEndpoint="172--234--197--95-k8s-calico--kube--controllers--7cbbd89d84--9crw6-eth0" Mar 13 00:38:19.612367 kubelet[2724]: I0313 00:38:19.611145 2724 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-wwppt" podStartSLOduration=34.611128298 podStartE2EDuration="34.611128298s" podCreationTimestamp="2026-03-13 00:37:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:38:19.584474688 +0000 UTC m=+41.289156401" watchObservedRunningTime="2026-03-13 00:38:19.611128298 +0000 UTC m=+41.315810021" Mar 13 00:38:19.614627 containerd[1547]: time="2026-03-13T00:38:19.614582838Z" level=info msg="connecting to shim 960e2b68741b41941d4b48bce68634453a1b630a31804d829e990a246de4b014" address="unix:///run/containerd/s/bf0ed0622b9c82565456cd42f724e09f3f8282a2655832b147b3f5d8cc68f2d8" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:38:19.662591 systemd[1]: Started cri-containerd-960e2b68741b41941d4b48bce68634453a1b630a31804d829e990a246de4b014.scope - libcontainer container 960e2b68741b41941d4b48bce68634453a1b630a31804d829e990a246de4b014. 
Mar 13 00:38:19.692301 systemd-networkd[1427]: cali1c5da0fa722: Link UP Mar 13 00:38:19.705345 systemd-networkd[1427]: cali1c5da0fa722: Gained carrier Mar 13 00:38:19.731535 containerd[1547]: 2026-03-13 00:38:19.454 [INFO][4391] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--197--95-k8s-calico--apiserver--5b66459b76--24wfm-eth0 calico-apiserver-5b66459b76- calico-system be52a627-668e-446b-a54a-44e513eacab5 888 0 2026-03-13 00:37:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b66459b76 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-234-197-95 calico-apiserver-5b66459b76-24wfm eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali1c5da0fa722 [] [] }} ContainerID="5d8f540d349c8ec6722e2b549396886fba5a01be3bb069744b3961c564c07153" Namespace="calico-system" Pod="calico-apiserver-5b66459b76-24wfm" WorkloadEndpoint="172--234--197--95-k8s-calico--apiserver--5b66459b76--24wfm-" Mar 13 00:38:19.731535 containerd[1547]: 2026-03-13 00:38:19.454 [INFO][4391] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5d8f540d349c8ec6722e2b549396886fba5a01be3bb069744b3961c564c07153" Namespace="calico-system" Pod="calico-apiserver-5b66459b76-24wfm" WorkloadEndpoint="172--234--197--95-k8s-calico--apiserver--5b66459b76--24wfm-eth0" Mar 13 00:38:19.731535 containerd[1547]: 2026-03-13 00:38:19.498 [INFO][4411] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d8f540d349c8ec6722e2b549396886fba5a01be3bb069744b3961c564c07153" HandleID="k8s-pod-network.5d8f540d349c8ec6722e2b549396886fba5a01be3bb069744b3961c564c07153" Workload="172--234--197--95-k8s-calico--apiserver--5b66459b76--24wfm-eth0" Mar 13 00:38:19.731535 containerd[1547]: 2026-03-13 00:38:19.506 [INFO][4411] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5d8f540d349c8ec6722e2b549396886fba5a01be3bb069744b3961c564c07153" HandleID="k8s-pod-network.5d8f540d349c8ec6722e2b549396886fba5a01be3bb069744b3961c564c07153" Workload="172--234--197--95-k8s-calico--apiserver--5b66459b76--24wfm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002efab0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-197-95", "pod":"calico-apiserver-5b66459b76-24wfm", "timestamp":"2026-03-13 00:38:19.498367218 +0000 UTC"}, Hostname:"172-234-197-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000112dc0)} Mar 13 00:38:19.731535 containerd[1547]: 2026-03-13 00:38:19.506 [INFO][4411] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:38:19.731535 containerd[1547]: 2026-03-13 00:38:19.530 [INFO][4411] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:38:19.731535 containerd[1547]: 2026-03-13 00:38:19.530 [INFO][4411] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-197-95' Mar 13 00:38:19.731535 containerd[1547]: 2026-03-13 00:38:19.602 [INFO][4411] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5d8f540d349c8ec6722e2b549396886fba5a01be3bb069744b3961c564c07153" host="172-234-197-95" Mar 13 00:38:19.731535 containerd[1547]: 2026-03-13 00:38:19.622 [INFO][4411] ipam/ipam.go 409: Looking up existing affinities for host host="172-234-197-95" Mar 13 00:38:19.731535 containerd[1547]: 2026-03-13 00:38:19.628 [INFO][4411] ipam/ipam.go 526: Trying affinity for 192.168.79.64/26 host="172-234-197-95" Mar 13 00:38:19.731535 containerd[1547]: 2026-03-13 00:38:19.636 [INFO][4411] ipam/ipam.go 160: Attempting to load block cidr=192.168.79.64/26 host="172-234-197-95" Mar 13 00:38:19.731535 containerd[1547]: 2026-03-13 00:38:19.641 [INFO][4411] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.79.64/26 host="172-234-197-95" Mar 13 00:38:19.731535 containerd[1547]: 2026-03-13 00:38:19.642 [INFO][4411] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.79.64/26 handle="k8s-pod-network.5d8f540d349c8ec6722e2b549396886fba5a01be3bb069744b3961c564c07153" host="172-234-197-95" Mar 13 00:38:19.731535 containerd[1547]: 2026-03-13 00:38:19.644 [INFO][4411] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5d8f540d349c8ec6722e2b549396886fba5a01be3bb069744b3961c564c07153 Mar 13 00:38:19.731535 containerd[1547]: 2026-03-13 00:38:19.649 [INFO][4411] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.79.64/26 handle="k8s-pod-network.5d8f540d349c8ec6722e2b549396886fba5a01be3bb069744b3961c564c07153" host="172-234-197-95" Mar 13 00:38:19.731535 containerd[1547]: 2026-03-13 00:38:19.660 [INFO][4411] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.79.69/26] block=192.168.79.64/26 
handle="k8s-pod-network.5d8f540d349c8ec6722e2b549396886fba5a01be3bb069744b3961c564c07153" host="172-234-197-95" Mar 13 00:38:19.731535 containerd[1547]: 2026-03-13 00:38:19.661 [INFO][4411] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.79.69/26] handle="k8s-pod-network.5d8f540d349c8ec6722e2b549396886fba5a01be3bb069744b3961c564c07153" host="172-234-197-95" Mar 13 00:38:19.731535 containerd[1547]: 2026-03-13 00:38:19.661 [INFO][4411] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:38:19.731535 containerd[1547]: 2026-03-13 00:38:19.661 [INFO][4411] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.79.69/26] IPv6=[] ContainerID="5d8f540d349c8ec6722e2b549396886fba5a01be3bb069744b3961c564c07153" HandleID="k8s-pod-network.5d8f540d349c8ec6722e2b549396886fba5a01be3bb069744b3961c564c07153" Workload="172--234--197--95-k8s-calico--apiserver--5b66459b76--24wfm-eth0" Mar 13 00:38:19.732034 containerd[1547]: 2026-03-13 00:38:19.671 [INFO][4391] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d8f540d349c8ec6722e2b549396886fba5a01be3bb069744b3961c564c07153" Namespace="calico-system" Pod="calico-apiserver-5b66459b76-24wfm" WorkloadEndpoint="172--234--197--95-k8s-calico--apiserver--5b66459b76--24wfm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--197--95-k8s-calico--apiserver--5b66459b76--24wfm-eth0", GenerateName:"calico-apiserver-5b66459b76-", Namespace:"calico-system", SelfLink:"", UID:"be52a627-668e-446b-a54a-44e513eacab5", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 37, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b66459b76", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-197-95", ContainerID:"", Pod:"calico-apiserver-5b66459b76-24wfm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1c5da0fa722", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:38:19.732034 containerd[1547]: 2026-03-13 00:38:19.671 [INFO][4391] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.69/32] ContainerID="5d8f540d349c8ec6722e2b549396886fba5a01be3bb069744b3961c564c07153" Namespace="calico-system" Pod="calico-apiserver-5b66459b76-24wfm" WorkloadEndpoint="172--234--197--95-k8s-calico--apiserver--5b66459b76--24wfm-eth0" Mar 13 00:38:19.732034 containerd[1547]: 2026-03-13 00:38:19.671 [INFO][4391] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1c5da0fa722 ContainerID="5d8f540d349c8ec6722e2b549396886fba5a01be3bb069744b3961c564c07153" Namespace="calico-system" Pod="calico-apiserver-5b66459b76-24wfm" WorkloadEndpoint="172--234--197--95-k8s-calico--apiserver--5b66459b76--24wfm-eth0" Mar 13 00:38:19.732034 containerd[1547]: 2026-03-13 00:38:19.712 [INFO][4391] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d8f540d349c8ec6722e2b549396886fba5a01be3bb069744b3961c564c07153" Namespace="calico-system" Pod="calico-apiserver-5b66459b76-24wfm" WorkloadEndpoint="172--234--197--95-k8s-calico--apiserver--5b66459b76--24wfm-eth0" Mar 13 00:38:19.732034 containerd[1547]: 2026-03-13 00:38:19.713 [INFO][4391] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="5d8f540d349c8ec6722e2b549396886fba5a01be3bb069744b3961c564c07153" Namespace="calico-system" Pod="calico-apiserver-5b66459b76-24wfm" WorkloadEndpoint="172--234--197--95-k8s-calico--apiserver--5b66459b76--24wfm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--197--95-k8s-calico--apiserver--5b66459b76--24wfm-eth0", GenerateName:"calico-apiserver-5b66459b76-", Namespace:"calico-system", SelfLink:"", UID:"be52a627-668e-446b-a54a-44e513eacab5", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 37, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b66459b76", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-197-95", ContainerID:"5d8f540d349c8ec6722e2b549396886fba5a01be3bb069744b3961c564c07153", Pod:"calico-apiserver-5b66459b76-24wfm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1c5da0fa722", MAC:"52:19:f6:08:83:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:38:19.732034 containerd[1547]: 2026-03-13 00:38:19.722 [INFO][4391] cni-plugin/k8s.go 532: Wrote updated endpoint to 
datastore ContainerID="5d8f540d349c8ec6722e2b549396886fba5a01be3bb069744b3961c564c07153" Namespace="calico-system" Pod="calico-apiserver-5b66459b76-24wfm" WorkloadEndpoint="172--234--197--95-k8s-calico--apiserver--5b66459b76--24wfm-eth0" Mar 13 00:38:19.774520 containerd[1547]: time="2026-03-13T00:38:19.774437978Z" level=info msg="connecting to shim 5d8f540d349c8ec6722e2b549396886fba5a01be3bb069744b3961c564c07153" address="unix:///run/containerd/s/923448122d8b2ebf396c228ea7d803ef7e859e7782c38ea2dfc3f11b8c460051" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:38:19.826084 systemd[1]: Started cri-containerd-5d8f540d349c8ec6722e2b549396886fba5a01be3bb069744b3961c564c07153.scope - libcontainer container 5d8f540d349c8ec6722e2b549396886fba5a01be3bb069744b3961c564c07153. Mar 13 00:38:19.831171 containerd[1547]: time="2026-03-13T00:38:19.831119218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cbbd89d84-9crw6,Uid:c0e98f25-f75c-4aab-873f-d1379f52de43,Namespace:calico-system,Attempt:0,} returns sandbox id \"960e2b68741b41941d4b48bce68634453a1b630a31804d829e990a246de4b014\"" Mar 13 00:38:19.909074 containerd[1547]: time="2026-03-13T00:38:19.908900878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b66459b76-24wfm,Uid:be52a627-668e-446b-a54a-44e513eacab5,Namespace:calico-system,Attempt:0,} returns sandbox id \"5d8f540d349c8ec6722e2b549396886fba5a01be3bb069744b3961c564c07153\"" Mar 13 00:38:19.960419 systemd-networkd[1427]: cali18726c7dfb5: Gained IPv6LL Mar 13 00:38:20.389853 containerd[1547]: time="2026-03-13T00:38:20.389805198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-tsmbw,Uid:dd18cfae-69a6-47ae-8e71-672f8d18f675,Namespace:calico-system,Attempt:0,}" Mar 13 00:38:20.511932 systemd-networkd[1427]: calid78f7071cd1: Link UP Mar 13 00:38:20.514526 systemd-networkd[1427]: calid78f7071cd1: Gained carrier Mar 13 00:38:20.533285 containerd[1547]: 2026-03-13 00:38:20.438 
[INFO][4555] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--197--95-k8s-goldmane--5b85766d88--tsmbw-eth0 goldmane-5b85766d88- calico-system dd18cfae-69a6-47ae-8e71-672f8d18f675 884 0 2026-03-13 00:37:56 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-234-197-95 goldmane-5b85766d88-tsmbw eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid78f7071cd1 [] [] }} ContainerID="f8cd2b8fc4a52e7eba7f925222262f6a1d6dc6ac7ae9b87a3124d89070c12b8c" Namespace="calico-system" Pod="goldmane-5b85766d88-tsmbw" WorkloadEndpoint="172--234--197--95-k8s-goldmane--5b85766d88--tsmbw-" Mar 13 00:38:20.533285 containerd[1547]: 2026-03-13 00:38:20.439 [INFO][4555] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f8cd2b8fc4a52e7eba7f925222262f6a1d6dc6ac7ae9b87a3124d89070c12b8c" Namespace="calico-system" Pod="goldmane-5b85766d88-tsmbw" WorkloadEndpoint="172--234--197--95-k8s-goldmane--5b85766d88--tsmbw-eth0" Mar 13 00:38:20.533285 containerd[1547]: 2026-03-13 00:38:20.468 [INFO][4577] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f8cd2b8fc4a52e7eba7f925222262f6a1d6dc6ac7ae9b87a3124d89070c12b8c" HandleID="k8s-pod-network.f8cd2b8fc4a52e7eba7f925222262f6a1d6dc6ac7ae9b87a3124d89070c12b8c" Workload="172--234--197--95-k8s-goldmane--5b85766d88--tsmbw-eth0" Mar 13 00:38:20.533285 containerd[1547]: 2026-03-13 00:38:20.477 [INFO][4577] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f8cd2b8fc4a52e7eba7f925222262f6a1d6dc6ac7ae9b87a3124d89070c12b8c" HandleID="k8s-pod-network.f8cd2b8fc4a52e7eba7f925222262f6a1d6dc6ac7ae9b87a3124d89070c12b8c" Workload="172--234--197--95-k8s-goldmane--5b85766d88--tsmbw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0002fd7a0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-197-95", "pod":"goldmane-5b85766d88-tsmbw", "timestamp":"2026-03-13 00:38:20.468839818 +0000 UTC"}, Hostname:"172-234-197-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003871e0)} Mar 13 00:38:20.533285 containerd[1547]: 2026-03-13 00:38:20.477 [INFO][4577] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:38:20.533285 containerd[1547]: 2026-03-13 00:38:20.477 [INFO][4577] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:38:20.533285 containerd[1547]: 2026-03-13 00:38:20.477 [INFO][4577] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-197-95' Mar 13 00:38:20.533285 containerd[1547]: 2026-03-13 00:38:20.480 [INFO][4577] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f8cd2b8fc4a52e7eba7f925222262f6a1d6dc6ac7ae9b87a3124d89070c12b8c" host="172-234-197-95" Mar 13 00:38:20.533285 containerd[1547]: 2026-03-13 00:38:20.484 [INFO][4577] ipam/ipam.go 409: Looking up existing affinities for host host="172-234-197-95" Mar 13 00:38:20.533285 containerd[1547]: 2026-03-13 00:38:20.489 [INFO][4577] ipam/ipam.go 526: Trying affinity for 192.168.79.64/26 host="172-234-197-95" Mar 13 00:38:20.533285 containerd[1547]: 2026-03-13 00:38:20.492 [INFO][4577] ipam/ipam.go 160: Attempting to load block cidr=192.168.79.64/26 host="172-234-197-95" Mar 13 00:38:20.533285 containerd[1547]: 2026-03-13 00:38:20.494 [INFO][4577] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.79.64/26 host="172-234-197-95" Mar 13 00:38:20.533285 containerd[1547]: 2026-03-13 00:38:20.494 [INFO][4577] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.79.64/26 
handle="k8s-pod-network.f8cd2b8fc4a52e7eba7f925222262f6a1d6dc6ac7ae9b87a3124d89070c12b8c" host="172-234-197-95" Mar 13 00:38:20.533285 containerd[1547]: 2026-03-13 00:38:20.496 [INFO][4577] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f8cd2b8fc4a52e7eba7f925222262f6a1d6dc6ac7ae9b87a3124d89070c12b8c Mar 13 00:38:20.533285 containerd[1547]: 2026-03-13 00:38:20.500 [INFO][4577] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.79.64/26 handle="k8s-pod-network.f8cd2b8fc4a52e7eba7f925222262f6a1d6dc6ac7ae9b87a3124d89070c12b8c" host="172-234-197-95" Mar 13 00:38:20.533285 containerd[1547]: 2026-03-13 00:38:20.505 [INFO][4577] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.79.70/26] block=192.168.79.64/26 handle="k8s-pod-network.f8cd2b8fc4a52e7eba7f925222262f6a1d6dc6ac7ae9b87a3124d89070c12b8c" host="172-234-197-95" Mar 13 00:38:20.533285 containerd[1547]: 2026-03-13 00:38:20.505 [INFO][4577] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.79.70/26] handle="k8s-pod-network.f8cd2b8fc4a52e7eba7f925222262f6a1d6dc6ac7ae9b87a3124d89070c12b8c" host="172-234-197-95" Mar 13 00:38:20.533285 containerd[1547]: 2026-03-13 00:38:20.505 [INFO][4577] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 13 00:38:20.533285 containerd[1547]: 2026-03-13 00:38:20.505 [INFO][4577] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.79.70/26] IPv6=[] ContainerID="f8cd2b8fc4a52e7eba7f925222262f6a1d6dc6ac7ae9b87a3124d89070c12b8c" HandleID="k8s-pod-network.f8cd2b8fc4a52e7eba7f925222262f6a1d6dc6ac7ae9b87a3124d89070c12b8c" Workload="172--234--197--95-k8s-goldmane--5b85766d88--tsmbw-eth0" Mar 13 00:38:20.534983 containerd[1547]: 2026-03-13 00:38:20.508 [INFO][4555] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f8cd2b8fc4a52e7eba7f925222262f6a1d6dc6ac7ae9b87a3124d89070c12b8c" Namespace="calico-system" Pod="goldmane-5b85766d88-tsmbw" WorkloadEndpoint="172--234--197--95-k8s-goldmane--5b85766d88--tsmbw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--197--95-k8s-goldmane--5b85766d88--tsmbw-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"dd18cfae-69a6-47ae-8e71-672f8d18f675", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 37, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-197-95", ContainerID:"", Pod:"goldmane-5b85766d88-tsmbw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.79.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calid78f7071cd1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:38:20.534983 containerd[1547]: 2026-03-13 00:38:20.508 [INFO][4555] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.70/32] ContainerID="f8cd2b8fc4a52e7eba7f925222262f6a1d6dc6ac7ae9b87a3124d89070c12b8c" Namespace="calico-system" Pod="goldmane-5b85766d88-tsmbw" WorkloadEndpoint="172--234--197--95-k8s-goldmane--5b85766d88--tsmbw-eth0" Mar 13 00:38:20.534983 containerd[1547]: 2026-03-13 00:38:20.508 [INFO][4555] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid78f7071cd1 ContainerID="f8cd2b8fc4a52e7eba7f925222262f6a1d6dc6ac7ae9b87a3124d89070c12b8c" Namespace="calico-system" Pod="goldmane-5b85766d88-tsmbw" WorkloadEndpoint="172--234--197--95-k8s-goldmane--5b85766d88--tsmbw-eth0" Mar 13 00:38:20.534983 containerd[1547]: 2026-03-13 00:38:20.515 [INFO][4555] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f8cd2b8fc4a52e7eba7f925222262f6a1d6dc6ac7ae9b87a3124d89070c12b8c" Namespace="calico-system" Pod="goldmane-5b85766d88-tsmbw" WorkloadEndpoint="172--234--197--95-k8s-goldmane--5b85766d88--tsmbw-eth0" Mar 13 00:38:20.534983 containerd[1547]: 2026-03-13 00:38:20.517 [INFO][4555] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f8cd2b8fc4a52e7eba7f925222262f6a1d6dc6ac7ae9b87a3124d89070c12b8c" Namespace="calico-system" Pod="goldmane-5b85766d88-tsmbw" WorkloadEndpoint="172--234--197--95-k8s-goldmane--5b85766d88--tsmbw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--197--95-k8s-goldmane--5b85766d88--tsmbw-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"dd18cfae-69a6-47ae-8e71-672f8d18f675", ResourceVersion:"884", Generation:0, 
CreationTimestamp:time.Date(2026, time.March, 13, 0, 37, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-197-95", ContainerID:"f8cd2b8fc4a52e7eba7f925222262f6a1d6dc6ac7ae9b87a3124d89070c12b8c", Pod:"goldmane-5b85766d88-tsmbw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.79.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid78f7071cd1", MAC:"a6:d6:28:dc:31:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:38:20.534983 containerd[1547]: 2026-03-13 00:38:20.527 [INFO][4555] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f8cd2b8fc4a52e7eba7f925222262f6a1d6dc6ac7ae9b87a3124d89070c12b8c" Namespace="calico-system" Pod="goldmane-5b85766d88-tsmbw" WorkloadEndpoint="172--234--197--95-k8s-goldmane--5b85766d88--tsmbw-eth0" Mar 13 00:38:20.562564 kubelet[2724]: E0313 00:38:20.562543 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:38:20.572325 containerd[1547]: time="2026-03-13T00:38:20.571084968Z" level=info msg="connecting to shim f8cd2b8fc4a52e7eba7f925222262f6a1d6dc6ac7ae9b87a3124d89070c12b8c" 
address="unix:///run/containerd/s/d388aec8d220103391c9ba9a2c15fb55fe21575a281e0349a4e58b4e2c4c9b71" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:38:20.610391 systemd[1]: Started cri-containerd-f8cd2b8fc4a52e7eba7f925222262f6a1d6dc6ac7ae9b87a3124d89070c12b8c.scope - libcontainer container f8cd2b8fc4a52e7eba7f925222262f6a1d6dc6ac7ae9b87a3124d89070c12b8c. Mar 13 00:38:20.664787 systemd-networkd[1427]: cali8c497d48a89: Gained IPv6LL Mar 13 00:38:20.683252 containerd[1547]: time="2026-03-13T00:38:20.683159338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-tsmbw,Uid:dd18cfae-69a6-47ae-8e71-672f8d18f675,Namespace:calico-system,Attempt:0,} returns sandbox id \"f8cd2b8fc4a52e7eba7f925222262f6a1d6dc6ac7ae9b87a3124d89070c12b8c\"" Mar 13 00:38:20.836666 containerd[1547]: time="2026-03-13T00:38:20.835932148Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:20.836666 containerd[1547]: time="2026-03-13T00:38:20.836634298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 13 00:38:20.836999 containerd[1547]: time="2026-03-13T00:38:20.836979048Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:20.839366 containerd[1547]: time="2026-03-13T00:38:20.839336868Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:20.840030 containerd[1547]: time="2026-03-13T00:38:20.840010108Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", 
repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.86268374s" Mar 13 00:38:20.840095 containerd[1547]: time="2026-03-13T00:38:20.840081768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 13 00:38:20.842871 containerd[1547]: time="2026-03-13T00:38:20.842833698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 13 00:38:20.844174 containerd[1547]: time="2026-03-13T00:38:20.844155188Z" level=info msg="CreateContainer within sandbox \"917cb57a0fd23acc6afbd9d5a95272732a1dfe27a8994f981053d84995cd1fed\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 13 00:38:20.855738 containerd[1547]: time="2026-03-13T00:38:20.855711718Z" level=info msg="Container 7514a9728f1720cfe72f6505f4b0bf8f432b65646d143bbd55b27972d8ad6f1e: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:38:20.874953 containerd[1547]: time="2026-03-13T00:38:20.874906398Z" level=info msg="CreateContainer within sandbox \"917cb57a0fd23acc6afbd9d5a95272732a1dfe27a8994f981053d84995cd1fed\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7514a9728f1720cfe72f6505f4b0bf8f432b65646d143bbd55b27972d8ad6f1e\"" Mar 13 00:38:20.875485 containerd[1547]: time="2026-03-13T00:38:20.875459618Z" level=info msg="StartContainer for \"7514a9728f1720cfe72f6505f4b0bf8f432b65646d143bbd55b27972d8ad6f1e\"" Mar 13 00:38:20.876861 containerd[1547]: time="2026-03-13T00:38:20.876833348Z" level=info msg="connecting to shim 7514a9728f1720cfe72f6505f4b0bf8f432b65646d143bbd55b27972d8ad6f1e" address="unix:///run/containerd/s/d0d3bd503c2ec734bba5e771d2e8ddf41a85b50f0781335c010a0471bd6a01a5" protocol=ttrpc version=3 Mar 13 00:38:20.899406 systemd[1]: Started cri-containerd-7514a9728f1720cfe72f6505f4b0bf8f432b65646d143bbd55b27972d8ad6f1e.scope - libcontainer 
container 7514a9728f1720cfe72f6505f4b0bf8f432b65646d143bbd55b27972d8ad6f1e. Mar 13 00:38:20.920511 systemd-networkd[1427]: calid62377f73af: Gained IPv6LL Mar 13 00:38:20.995543 containerd[1547]: time="2026-03-13T00:38:20.995430578Z" level=info msg="StartContainer for \"7514a9728f1720cfe72f6505f4b0bf8f432b65646d143bbd55b27972d8ad6f1e\" returns successfully" Mar 13 00:38:21.432403 systemd-networkd[1427]: cali1c5da0fa722: Gained IPv6LL Mar 13 00:38:21.569563 kubelet[2724]: E0313 00:38:21.569446 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:38:21.752555 systemd-networkd[1427]: calid78f7071cd1: Gained IPv6LL Mar 13 00:38:22.306473 containerd[1547]: time="2026-03-13T00:38:22.306388275Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:22.307567 containerd[1547]: time="2026-03-13T00:38:22.307257965Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 13 00:38:22.308127 containerd[1547]: time="2026-03-13T00:38:22.308078849Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:22.310116 containerd[1547]: time="2026-03-13T00:38:22.310082962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:22.310939 containerd[1547]: time="2026-03-13T00:38:22.310882903Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", 
repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 1.468012724s" Mar 13 00:38:22.311021 containerd[1547]: time="2026-03-13T00:38:22.311005228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 13 00:38:22.312077 containerd[1547]: time="2026-03-13T00:38:22.312049420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 13 00:38:22.332898 containerd[1547]: time="2026-03-13T00:38:22.332866439Z" level=info msg="CreateContainer within sandbox \"960e2b68741b41941d4b48bce68634453a1b630a31804d829e990a246de4b014\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 13 00:38:22.337826 containerd[1547]: time="2026-03-13T00:38:22.337795102Z" level=info msg="Container bcad40efbbf666f010f6c2e33724242f4669a5a4a97844c2908cf3c121293df9: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:38:22.346002 containerd[1547]: time="2026-03-13T00:38:22.345969584Z" level=info msg="CreateContainer within sandbox \"960e2b68741b41941d4b48bce68634453a1b630a31804d829e990a246de4b014\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"bcad40efbbf666f010f6c2e33724242f4669a5a4a97844c2908cf3c121293df9\"" Mar 13 00:38:22.346650 containerd[1547]: time="2026-03-13T00:38:22.346582532Z" level=info msg="StartContainer for \"bcad40efbbf666f010f6c2e33724242f4669a5a4a97844c2908cf3c121293df9\"" Mar 13 00:38:22.347806 containerd[1547]: time="2026-03-13T00:38:22.347703073Z" level=info msg="connecting to shim bcad40efbbf666f010f6c2e33724242f4669a5a4a97844c2908cf3c121293df9" address="unix:///run/containerd/s/bf0ed0622b9c82565456cd42f724e09f3f8282a2655832b147b3f5d8cc68f2d8" protocol=ttrpc version=3 Mar 13 00:38:22.373409 systemd[1]: 
Started cri-containerd-bcad40efbbf666f010f6c2e33724242f4669a5a4a97844c2908cf3c121293df9.scope - libcontainer container bcad40efbbf666f010f6c2e33724242f4669a5a4a97844c2908cf3c121293df9. Mar 13 00:38:22.385642 containerd[1547]: time="2026-03-13T00:38:22.385008954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b66459b76-p78zh,Uid:afd087c8-1a42-49f5-ab0b-7984fdd56d7a,Namespace:calico-system,Attempt:0,}" Mar 13 00:38:22.389305 kubelet[2724]: E0313 00:38:22.388689 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:38:22.390181 containerd[1547]: time="2026-03-13T00:38:22.390135912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r7txz,Uid:69732a13-74fe-417e-ae4a-16962aedcd17,Namespace:kube-system,Attempt:0,}" Mar 13 00:38:22.505625 containerd[1547]: time="2026-03-13T00:38:22.505595863Z" level=info msg="StartContainer for \"bcad40efbbf666f010f6c2e33724242f4669a5a4a97844c2908cf3c121293df9\" returns successfully" Mar 13 00:38:22.598999 systemd-networkd[1427]: cali370eb732036: Link UP Mar 13 00:38:22.601384 systemd-networkd[1427]: cali370eb732036: Gained carrier Mar 13 00:38:22.606327 kubelet[2724]: I0313 00:38:22.603262 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7cbbd89d84-9crw6" podStartSLOduration=23.12486067 podStartE2EDuration="25.603241864s" podCreationTimestamp="2026-03-13 00:37:57 +0000 UTC" firstStartedPulling="2026-03-13 00:38:19.833520068 +0000 UTC m=+41.538201781" lastFinishedPulling="2026-03-13 00:38:22.311901262 +0000 UTC m=+44.016582975" observedRunningTime="2026-03-13 00:38:22.59988582 +0000 UTC m=+44.304567553" watchObservedRunningTime="2026-03-13 00:38:22.603241864 +0000 UTC m=+44.307923577" Mar 13 00:38:22.636761 containerd[1547]: 2026-03-13 00:38:22.462 [INFO][4742] 
cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--197--95-k8s-coredns--674b8bbfcf--r7txz-eth0 coredns-674b8bbfcf- kube-system 69732a13-74fe-417e-ae4a-16962aedcd17 890 0 2026-03-13 00:37:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-234-197-95 coredns-674b8bbfcf-r7txz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali370eb732036 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="27abde7c76b0ce010e317abea38748ab26c11e08249308b142ebd06a1797760a" Namespace="kube-system" Pod="coredns-674b8bbfcf-r7txz" WorkloadEndpoint="172--234--197--95-k8s-coredns--674b8bbfcf--r7txz-" Mar 13 00:38:22.636761 containerd[1547]: 2026-03-13 00:38:22.462 [INFO][4742] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="27abde7c76b0ce010e317abea38748ab26c11e08249308b142ebd06a1797760a" Namespace="kube-system" Pod="coredns-674b8bbfcf-r7txz" WorkloadEndpoint="172--234--197--95-k8s-coredns--674b8bbfcf--r7txz-eth0" Mar 13 00:38:22.636761 containerd[1547]: 2026-03-13 00:38:22.534 [INFO][4756] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="27abde7c76b0ce010e317abea38748ab26c11e08249308b142ebd06a1797760a" HandleID="k8s-pod-network.27abde7c76b0ce010e317abea38748ab26c11e08249308b142ebd06a1797760a" Workload="172--234--197--95-k8s-coredns--674b8bbfcf--r7txz-eth0" Mar 13 00:38:22.636761 containerd[1547]: 2026-03-13 00:38:22.544 [INFO][4756] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="27abde7c76b0ce010e317abea38748ab26c11e08249308b142ebd06a1797760a" HandleID="k8s-pod-network.27abde7c76b0ce010e317abea38748ab26c11e08249308b142ebd06a1797760a" Workload="172--234--197--95-k8s-coredns--674b8bbfcf--r7txz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00045c9d0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-234-197-95", "pod":"coredns-674b8bbfcf-r7txz", "timestamp":"2026-03-13 00:38:22.534904624 +0000 UTC"}, Hostname:"172-234-197-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000206580)} Mar 13 00:38:22.636761 containerd[1547]: 2026-03-13 00:38:22.544 [INFO][4756] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:38:22.636761 containerd[1547]: 2026-03-13 00:38:22.544 [INFO][4756] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:38:22.636761 containerd[1547]: 2026-03-13 00:38:22.544 [INFO][4756] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-197-95' Mar 13 00:38:22.636761 containerd[1547]: 2026-03-13 00:38:22.547 [INFO][4756] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.27abde7c76b0ce010e317abea38748ab26c11e08249308b142ebd06a1797760a" host="172-234-197-95" Mar 13 00:38:22.636761 containerd[1547]: 2026-03-13 00:38:22.553 [INFO][4756] ipam/ipam.go 409: Looking up existing affinities for host host="172-234-197-95" Mar 13 00:38:22.636761 containerd[1547]: 2026-03-13 00:38:22.558 [INFO][4756] ipam/ipam.go 526: Trying affinity for 192.168.79.64/26 host="172-234-197-95" Mar 13 00:38:22.636761 containerd[1547]: 2026-03-13 00:38:22.560 [INFO][4756] ipam/ipam.go 160: Attempting to load block cidr=192.168.79.64/26 host="172-234-197-95" Mar 13 00:38:22.636761 containerd[1547]: 2026-03-13 00:38:22.563 [INFO][4756] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.79.64/26 host="172-234-197-95" Mar 13 00:38:22.636761 containerd[1547]: 2026-03-13 00:38:22.563 [INFO][4756] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.79.64/26 
handle="k8s-pod-network.27abde7c76b0ce010e317abea38748ab26c11e08249308b142ebd06a1797760a" host="172-234-197-95" Mar 13 00:38:22.636761 containerd[1547]: 2026-03-13 00:38:22.565 [INFO][4756] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.27abde7c76b0ce010e317abea38748ab26c11e08249308b142ebd06a1797760a Mar 13 00:38:22.636761 containerd[1547]: 2026-03-13 00:38:22.574 [INFO][4756] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.79.64/26 handle="k8s-pod-network.27abde7c76b0ce010e317abea38748ab26c11e08249308b142ebd06a1797760a" host="172-234-197-95" Mar 13 00:38:22.636761 containerd[1547]: 2026-03-13 00:38:22.588 [INFO][4756] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.79.71/26] block=192.168.79.64/26 handle="k8s-pod-network.27abde7c76b0ce010e317abea38748ab26c11e08249308b142ebd06a1797760a" host="172-234-197-95" Mar 13 00:38:22.636761 containerd[1547]: 2026-03-13 00:38:22.588 [INFO][4756] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.79.71/26] handle="k8s-pod-network.27abde7c76b0ce010e317abea38748ab26c11e08249308b142ebd06a1797760a" host="172-234-197-95" Mar 13 00:38:22.636761 containerd[1547]: 2026-03-13 00:38:22.588 [INFO][4756] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 13 00:38:22.636761 containerd[1547]: 2026-03-13 00:38:22.588 [INFO][4756] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.79.71/26] IPv6=[] ContainerID="27abde7c76b0ce010e317abea38748ab26c11e08249308b142ebd06a1797760a" HandleID="k8s-pod-network.27abde7c76b0ce010e317abea38748ab26c11e08249308b142ebd06a1797760a" Workload="172--234--197--95-k8s-coredns--674b8bbfcf--r7txz-eth0" Mar 13 00:38:22.638706 containerd[1547]: 2026-03-13 00:38:22.592 [INFO][4742] cni-plugin/k8s.go 418: Populated endpoint ContainerID="27abde7c76b0ce010e317abea38748ab26c11e08249308b142ebd06a1797760a" Namespace="kube-system" Pod="coredns-674b8bbfcf-r7txz" WorkloadEndpoint="172--234--197--95-k8s-coredns--674b8bbfcf--r7txz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--197--95-k8s-coredns--674b8bbfcf--r7txz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"69732a13-74fe-417e-ae4a-16962aedcd17", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 37, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-197-95", ContainerID:"", Pod:"coredns-674b8bbfcf-r7txz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali370eb732036", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:38:22.638706 containerd[1547]: 2026-03-13 00:38:22.592 [INFO][4742] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.71/32] ContainerID="27abde7c76b0ce010e317abea38748ab26c11e08249308b142ebd06a1797760a" Namespace="kube-system" Pod="coredns-674b8bbfcf-r7txz" WorkloadEndpoint="172--234--197--95-k8s-coredns--674b8bbfcf--r7txz-eth0" Mar 13 00:38:22.638706 containerd[1547]: 2026-03-13 00:38:22.592 [INFO][4742] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali370eb732036 ContainerID="27abde7c76b0ce010e317abea38748ab26c11e08249308b142ebd06a1797760a" Namespace="kube-system" Pod="coredns-674b8bbfcf-r7txz" WorkloadEndpoint="172--234--197--95-k8s-coredns--674b8bbfcf--r7txz-eth0" Mar 13 00:38:22.638706 containerd[1547]: 2026-03-13 00:38:22.601 [INFO][4742] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="27abde7c76b0ce010e317abea38748ab26c11e08249308b142ebd06a1797760a" Namespace="kube-system" Pod="coredns-674b8bbfcf-r7txz" WorkloadEndpoint="172--234--197--95-k8s-coredns--674b8bbfcf--r7txz-eth0" Mar 13 00:38:22.638706 containerd[1547]: 2026-03-13 00:38:22.604 [INFO][4742] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="27abde7c76b0ce010e317abea38748ab26c11e08249308b142ebd06a1797760a" Namespace="kube-system" Pod="coredns-674b8bbfcf-r7txz" WorkloadEndpoint="172--234--197--95-k8s-coredns--674b8bbfcf--r7txz-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--197--95-k8s-coredns--674b8bbfcf--r7txz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"69732a13-74fe-417e-ae4a-16962aedcd17", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 37, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-197-95", ContainerID:"27abde7c76b0ce010e317abea38748ab26c11e08249308b142ebd06a1797760a", Pod:"coredns-674b8bbfcf-r7txz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali370eb732036", MAC:"62:d6:bb:48:11:18", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:38:22.638706 containerd[1547]: 2026-03-13 00:38:22.625 [INFO][4742] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="27abde7c76b0ce010e317abea38748ab26c11e08249308b142ebd06a1797760a" Namespace="kube-system" Pod="coredns-674b8bbfcf-r7txz" WorkloadEndpoint="172--234--197--95-k8s-coredns--674b8bbfcf--r7txz-eth0" Mar 13 00:38:22.671124 containerd[1547]: time="2026-03-13T00:38:22.671062329Z" level=info msg="connecting to shim 27abde7c76b0ce010e317abea38748ab26c11e08249308b142ebd06a1797760a" address="unix:///run/containerd/s/94ad64648bdafb6978a92f27c5e58c5f704979f1ceb9e111839e69dfcb877be4" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:38:22.722961 systemd[1]: Started cri-containerd-27abde7c76b0ce010e317abea38748ab26c11e08249308b142ebd06a1797760a.scope - libcontainer container 27abde7c76b0ce010e317abea38748ab26c11e08249308b142ebd06a1797760a. Mar 13 00:38:22.749388 systemd-networkd[1427]: calid0ad39a888e: Link UP Mar 13 00:38:22.750556 systemd-networkd[1427]: calid0ad39a888e: Gained carrier Mar 13 00:38:22.778629 containerd[1547]: 2026-03-13 00:38:22.480 [INFO][4729] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--197--95-k8s-calico--apiserver--5b66459b76--p78zh-eth0 calico-apiserver-5b66459b76- calico-system afd087c8-1a42-49f5-ab0b-7984fdd56d7a 889 0 2026-03-13 00:37:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b66459b76 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-234-197-95 calico-apiserver-5b66459b76-p78zh eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calid0ad39a888e [] [] }} ContainerID="053ca4afe6a31ccb6ce2c8af77126e1502e37631a0f8be012208b9183579bf7f" Namespace="calico-system" Pod="calico-apiserver-5b66459b76-p78zh" WorkloadEndpoint="172--234--197--95-k8s-calico--apiserver--5b66459b76--p78zh-" Mar 13 00:38:22.778629 containerd[1547]: 2026-03-13 00:38:22.480 [INFO][4729] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="053ca4afe6a31ccb6ce2c8af77126e1502e37631a0f8be012208b9183579bf7f" Namespace="calico-system" Pod="calico-apiserver-5b66459b76-p78zh" WorkloadEndpoint="172--234--197--95-k8s-calico--apiserver--5b66459b76--p78zh-eth0" Mar 13 00:38:22.778629 containerd[1547]: 2026-03-13 00:38:22.535 [INFO][4762] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="053ca4afe6a31ccb6ce2c8af77126e1502e37631a0f8be012208b9183579bf7f" HandleID="k8s-pod-network.053ca4afe6a31ccb6ce2c8af77126e1502e37631a0f8be012208b9183579bf7f" Workload="172--234--197--95-k8s-calico--apiserver--5b66459b76--p78zh-eth0" Mar 13 00:38:22.778629 containerd[1547]: 2026-03-13 00:38:22.545 [INFO][4762] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="053ca4afe6a31ccb6ce2c8af77126e1502e37631a0f8be012208b9183579bf7f" HandleID="k8s-pod-network.053ca4afe6a31ccb6ce2c8af77126e1502e37631a0f8be012208b9183579bf7f" Workload="172--234--197--95-k8s-calico--apiserver--5b66459b76--p78zh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003dd510), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-197-95", "pod":"calico-apiserver-5b66459b76-p78zh", "timestamp":"2026-03-13 00:38:22.535379764 +0000 UTC"}, Hostname:"172-234-197-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000f6c60)} Mar 13 00:38:22.778629 containerd[1547]: 2026-03-13 00:38:22.545 [INFO][4762] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:38:22.778629 containerd[1547]: 2026-03-13 00:38:22.589 [INFO][4762] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:38:22.778629 containerd[1547]: 2026-03-13 00:38:22.589 [INFO][4762] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-197-95' Mar 13 00:38:22.778629 containerd[1547]: 2026-03-13 00:38:22.662 [INFO][4762] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.053ca4afe6a31ccb6ce2c8af77126e1502e37631a0f8be012208b9183579bf7f" host="172-234-197-95" Mar 13 00:38:22.778629 containerd[1547]: 2026-03-13 00:38:22.685 [INFO][4762] ipam/ipam.go 409: Looking up existing affinities for host host="172-234-197-95" Mar 13 00:38:22.778629 containerd[1547]: 2026-03-13 00:38:22.697 [INFO][4762] ipam/ipam.go 526: Trying affinity for 192.168.79.64/26 host="172-234-197-95" Mar 13 00:38:22.778629 containerd[1547]: 2026-03-13 00:38:22.700 [INFO][4762] ipam/ipam.go 160: Attempting to load block cidr=192.168.79.64/26 host="172-234-197-95" Mar 13 00:38:22.778629 containerd[1547]: 2026-03-13 00:38:22.719 [INFO][4762] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.79.64/26 host="172-234-197-95" Mar 13 00:38:22.778629 containerd[1547]: 2026-03-13 00:38:22.720 [INFO][4762] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.79.64/26 handle="k8s-pod-network.053ca4afe6a31ccb6ce2c8af77126e1502e37631a0f8be012208b9183579bf7f" host="172-234-197-95" Mar 13 00:38:22.778629 containerd[1547]: 2026-03-13 00:38:22.722 [INFO][4762] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.053ca4afe6a31ccb6ce2c8af77126e1502e37631a0f8be012208b9183579bf7f Mar 13 00:38:22.778629 containerd[1547]: 2026-03-13 00:38:22.727 [INFO][4762] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.79.64/26 handle="k8s-pod-network.053ca4afe6a31ccb6ce2c8af77126e1502e37631a0f8be012208b9183579bf7f" host="172-234-197-95" Mar 13 00:38:22.778629 containerd[1547]: 2026-03-13 00:38:22.738 [INFO][4762] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.79.72/26] block=192.168.79.64/26 
handle="k8s-pod-network.053ca4afe6a31ccb6ce2c8af77126e1502e37631a0f8be012208b9183579bf7f" host="172-234-197-95" Mar 13 00:38:22.778629 containerd[1547]: 2026-03-13 00:38:22.738 [INFO][4762] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.79.72/26] handle="k8s-pod-network.053ca4afe6a31ccb6ce2c8af77126e1502e37631a0f8be012208b9183579bf7f" host="172-234-197-95" Mar 13 00:38:22.778629 containerd[1547]: 2026-03-13 00:38:22.738 [INFO][4762] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:38:22.778629 containerd[1547]: 2026-03-13 00:38:22.738 [INFO][4762] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.79.72/26] IPv6=[] ContainerID="053ca4afe6a31ccb6ce2c8af77126e1502e37631a0f8be012208b9183579bf7f" HandleID="k8s-pod-network.053ca4afe6a31ccb6ce2c8af77126e1502e37631a0f8be012208b9183579bf7f" Workload="172--234--197--95-k8s-calico--apiserver--5b66459b76--p78zh-eth0" Mar 13 00:38:22.779901 containerd[1547]: 2026-03-13 00:38:22.742 [INFO][4729] cni-plugin/k8s.go 418: Populated endpoint ContainerID="053ca4afe6a31ccb6ce2c8af77126e1502e37631a0f8be012208b9183579bf7f" Namespace="calico-system" Pod="calico-apiserver-5b66459b76-p78zh" WorkloadEndpoint="172--234--197--95-k8s-calico--apiserver--5b66459b76--p78zh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--197--95-k8s-calico--apiserver--5b66459b76--p78zh-eth0", GenerateName:"calico-apiserver-5b66459b76-", Namespace:"calico-system", SelfLink:"", UID:"afd087c8-1a42-49f5-ab0b-7984fdd56d7a", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 37, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b66459b76", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-197-95", ContainerID:"", Pod:"calico-apiserver-5b66459b76-p78zh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid0ad39a888e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:38:22.779901 containerd[1547]: 2026-03-13 00:38:22.744 [INFO][4729] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.72/32] ContainerID="053ca4afe6a31ccb6ce2c8af77126e1502e37631a0f8be012208b9183579bf7f" Namespace="calico-system" Pod="calico-apiserver-5b66459b76-p78zh" WorkloadEndpoint="172--234--197--95-k8s-calico--apiserver--5b66459b76--p78zh-eth0" Mar 13 00:38:22.779901 containerd[1547]: 2026-03-13 00:38:22.744 [INFO][4729] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid0ad39a888e ContainerID="053ca4afe6a31ccb6ce2c8af77126e1502e37631a0f8be012208b9183579bf7f" Namespace="calico-system" Pod="calico-apiserver-5b66459b76-p78zh" WorkloadEndpoint="172--234--197--95-k8s-calico--apiserver--5b66459b76--p78zh-eth0" Mar 13 00:38:22.779901 containerd[1547]: 2026-03-13 00:38:22.746 [INFO][4729] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="053ca4afe6a31ccb6ce2c8af77126e1502e37631a0f8be012208b9183579bf7f" Namespace="calico-system" Pod="calico-apiserver-5b66459b76-p78zh" WorkloadEndpoint="172--234--197--95-k8s-calico--apiserver--5b66459b76--p78zh-eth0" Mar 13 00:38:22.779901 containerd[1547]: 2026-03-13 00:38:22.747 [INFO][4729] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="053ca4afe6a31ccb6ce2c8af77126e1502e37631a0f8be012208b9183579bf7f" Namespace="calico-system" Pod="calico-apiserver-5b66459b76-p78zh" WorkloadEndpoint="172--234--197--95-k8s-calico--apiserver--5b66459b76--p78zh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--197--95-k8s-calico--apiserver--5b66459b76--p78zh-eth0", GenerateName:"calico-apiserver-5b66459b76-", Namespace:"calico-system", SelfLink:"", UID:"afd087c8-1a42-49f5-ab0b-7984fdd56d7a", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 37, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b66459b76", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-197-95", ContainerID:"053ca4afe6a31ccb6ce2c8af77126e1502e37631a0f8be012208b9183579bf7f", Pod:"calico-apiserver-5b66459b76-p78zh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid0ad39a888e", MAC:"b2:fd:9e:6a:8c:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:38:22.779901 containerd[1547]: 2026-03-13 00:38:22.772 [INFO][4729] cni-plugin/k8s.go 532: Wrote updated endpoint to 
datastore ContainerID="053ca4afe6a31ccb6ce2c8af77126e1502e37631a0f8be012208b9183579bf7f" Namespace="calico-system" Pod="calico-apiserver-5b66459b76-p78zh" WorkloadEndpoint="172--234--197--95-k8s-calico--apiserver--5b66459b76--p78zh-eth0" Mar 13 00:38:22.813790 containerd[1547]: time="2026-03-13T00:38:22.813756119Z" level=info msg="connecting to shim 053ca4afe6a31ccb6ce2c8af77126e1502e37631a0f8be012208b9183579bf7f" address="unix:///run/containerd/s/76ec0088dac6e557230dd09c0683a6ee2c5962d6ad73e558688e9e874c87bf28" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:38:22.877710 systemd[1]: Started cri-containerd-053ca4afe6a31ccb6ce2c8af77126e1502e37631a0f8be012208b9183579bf7f.scope - libcontainer container 053ca4afe6a31ccb6ce2c8af77126e1502e37631a0f8be012208b9183579bf7f. Mar 13 00:38:22.937291 containerd[1547]: time="2026-03-13T00:38:22.937102726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r7txz,Uid:69732a13-74fe-417e-ae4a-16962aedcd17,Namespace:kube-system,Attempt:0,} returns sandbox id \"27abde7c76b0ce010e317abea38748ab26c11e08249308b142ebd06a1797760a\"" Mar 13 00:38:22.939664 kubelet[2724]: E0313 00:38:22.939627 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:38:22.945085 containerd[1547]: time="2026-03-13T00:38:22.945060811Z" level=info msg="CreateContainer within sandbox \"27abde7c76b0ce010e317abea38748ab26c11e08249308b142ebd06a1797760a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 00:38:22.955670 containerd[1547]: time="2026-03-13T00:38:22.955649028Z" level=info msg="Container 5676311bd94d52fc65cedd23eeb6d7b22ab9544df193524c123b5bdecf4a226f: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:38:22.986321 containerd[1547]: time="2026-03-13T00:38:22.986299599Z" level=info msg="CreateContainer within sandbox 
\"27abde7c76b0ce010e317abea38748ab26c11e08249308b142ebd06a1797760a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5676311bd94d52fc65cedd23eeb6d7b22ab9544df193524c123b5bdecf4a226f\"" Mar 13 00:38:22.987390 containerd[1547]: time="2026-03-13T00:38:22.987349812Z" level=info msg="StartContainer for \"5676311bd94d52fc65cedd23eeb6d7b22ab9544df193524c123b5bdecf4a226f\"" Mar 13 00:38:22.990735 containerd[1547]: time="2026-03-13T00:38:22.990703405Z" level=info msg="connecting to shim 5676311bd94d52fc65cedd23eeb6d7b22ab9544df193524c123b5bdecf4a226f" address="unix:///run/containerd/s/94ad64648bdafb6978a92f27c5e58c5f704979f1ceb9e111839e69dfcb877be4" protocol=ttrpc version=3 Mar 13 00:38:23.029043 systemd[1]: Started cri-containerd-5676311bd94d52fc65cedd23eeb6d7b22ab9544df193524c123b5bdecf4a226f.scope - libcontainer container 5676311bd94d52fc65cedd23eeb6d7b22ab9544df193524c123b5bdecf4a226f. Mar 13 00:38:23.084925 containerd[1547]: time="2026-03-13T00:38:23.084166636Z" level=info msg="StartContainer for \"5676311bd94d52fc65cedd23eeb6d7b22ab9544df193524c123b5bdecf4a226f\" returns successfully" Mar 13 00:38:23.181454 containerd[1547]: time="2026-03-13T00:38:23.181338531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b66459b76-p78zh,Uid:afd087c8-1a42-49f5-ab0b-7984fdd56d7a,Namespace:calico-system,Attempt:0,} returns sandbox id \"053ca4afe6a31ccb6ce2c8af77126e1502e37631a0f8be012208b9183579bf7f\"" Mar 13 00:38:23.585332 kubelet[2724]: E0313 00:38:23.585135 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:38:23.588525 kubelet[2724]: I0313 00:38:23.588492 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:38:23.929222 systemd-networkd[1427]: cali370eb732036: Gained IPv6LL Mar 13 00:38:24.594177 kubelet[2724]: E0313 00:38:24.594126 2724 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:38:24.697459 systemd-networkd[1427]: calid0ad39a888e: Gained IPv6LL Mar 13 00:38:25.594917 kubelet[2724]: E0313 00:38:25.594889 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:38:27.560015 containerd[1547]: time="2026-03-13T00:38:27.559973526Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:27.566794 containerd[1547]: time="2026-03-13T00:38:27.566641036Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 13 00:38:27.567358 containerd[1547]: time="2026-03-13T00:38:27.567327929Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:27.571723 containerd[1547]: time="2026-03-13T00:38:27.571697148Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:27.573458 containerd[1547]: time="2026-03-13T00:38:27.573429287Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 5.26133172s" Mar 13 00:38:27.573458 containerd[1547]: 
time="2026-03-13T00:38:27.573456769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 13 00:38:27.575296 containerd[1547]: time="2026-03-13T00:38:27.574883530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 13 00:38:27.579666 containerd[1547]: time="2026-03-13T00:38:27.579642385Z" level=info msg="CreateContainer within sandbox \"5d8f540d349c8ec6722e2b549396886fba5a01be3bb069744b3961c564c07153\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 13 00:38:27.587295 containerd[1547]: time="2026-03-13T00:38:27.586664997Z" level=info msg="Container c7f505ecb1b60dedcc06e50f9117e69e86764a0b8ff6571281222d1fcac082f3: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:38:27.593001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3591006322.mount: Deactivated successfully. Mar 13 00:38:27.599527 containerd[1547]: time="2026-03-13T00:38:27.599496121Z" level=info msg="CreateContainer within sandbox \"5d8f540d349c8ec6722e2b549396886fba5a01be3bb069744b3961c564c07153\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c7f505ecb1b60dedcc06e50f9117e69e86764a0b8ff6571281222d1fcac082f3\"" Mar 13 00:38:27.601294 containerd[1547]: time="2026-03-13T00:38:27.600394363Z" level=info msg="StartContainer for \"c7f505ecb1b60dedcc06e50f9117e69e86764a0b8ff6571281222d1fcac082f3\"" Mar 13 00:38:27.602310 containerd[1547]: time="2026-03-13T00:38:27.602251613Z" level=info msg="connecting to shim c7f505ecb1b60dedcc06e50f9117e69e86764a0b8ff6571281222d1fcac082f3" address="unix:///run/containerd/s/923448122d8b2ebf396c228ea7d803ef7e859e7782c38ea2dfc3f11b8c460051" protocol=ttrpc version=3 Mar 13 00:38:27.639400 systemd[1]: Started cri-containerd-c7f505ecb1b60dedcc06e50f9117e69e86764a0b8ff6571281222d1fcac082f3.scope - libcontainer container 
c7f505ecb1b60dedcc06e50f9117e69e86764a0b8ff6571281222d1fcac082f3. Mar 13 00:38:27.736578 containerd[1547]: time="2026-03-13T00:38:27.736543035Z" level=info msg="StartContainer for \"c7f505ecb1b60dedcc06e50f9117e69e86764a0b8ff6571281222d1fcac082f3\" returns successfully" Mar 13 00:38:28.624298 kubelet[2724]: I0313 00:38:28.624178 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-r7txz" podStartSLOduration=43.624165926 podStartE2EDuration="43.624165926s" podCreationTimestamp="2026-03-13 00:37:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:38:23.600589318 +0000 UTC m=+45.305271041" watchObservedRunningTime="2026-03-13 00:38:28.624165926 +0000 UTC m=+50.328847639" Mar 13 00:38:29.199476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount502211036.mount: Deactivated successfully. Mar 13 00:38:29.553480 containerd[1547]: time="2026-03-13T00:38:29.553431686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:29.554357 containerd[1547]: time="2026-03-13T00:38:29.554187016Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 13 00:38:29.554839 containerd[1547]: time="2026-03-13T00:38:29.554811226Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:29.556618 containerd[1547]: time="2026-03-13T00:38:29.556586379Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:29.557886 containerd[1547]: time="2026-03-13T00:38:29.557846961Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 1.982935609s" Mar 13 00:38:29.557999 containerd[1547]: time="2026-03-13T00:38:29.557976531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 13 00:38:29.560313 containerd[1547]: time="2026-03-13T00:38:29.559525345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 13 00:38:29.562487 containerd[1547]: time="2026-03-13T00:38:29.562452491Z" level=info msg="CreateContainer within sandbox \"f8cd2b8fc4a52e7eba7f925222262f6a1d6dc6ac7ae9b87a3124d89070c12b8c\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 13 00:38:29.570137 containerd[1547]: time="2026-03-13T00:38:29.569528050Z" level=info msg="Container 6ae59f77b0ceac6043d679031fbb9ea78fb4e064a8cfb275c8bf14ec325232f5: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:38:29.581777 containerd[1547]: time="2026-03-13T00:38:29.581735301Z" level=info msg="CreateContainer within sandbox \"f8cd2b8fc4a52e7eba7f925222262f6a1d6dc6ac7ae9b87a3124d89070c12b8c\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"6ae59f77b0ceac6043d679031fbb9ea78fb4e064a8cfb275c8bf14ec325232f5\"" Mar 13 00:38:29.582957 containerd[1547]: time="2026-03-13T00:38:29.582261003Z" level=info msg="StartContainer for \"6ae59f77b0ceac6043d679031fbb9ea78fb4e064a8cfb275c8bf14ec325232f5\"" Mar 13 00:38:29.583617 containerd[1547]: time="2026-03-13T00:38:29.583586990Z" level=info msg="connecting to shim 6ae59f77b0ceac6043d679031fbb9ea78fb4e064a8cfb275c8bf14ec325232f5" 
address="unix:///run/containerd/s/d388aec8d220103391c9ba9a2c15fb55fe21575a281e0349a4e58b4e2c4c9b71" protocol=ttrpc version=3 Mar 13 00:38:29.610524 systemd[1]: Started cri-containerd-6ae59f77b0ceac6043d679031fbb9ea78fb4e064a8cfb275c8bf14ec325232f5.scope - libcontainer container 6ae59f77b0ceac6043d679031fbb9ea78fb4e064a8cfb275c8bf14ec325232f5. Mar 13 00:38:29.617917 kubelet[2724]: I0313 00:38:29.617883 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:38:29.675337 containerd[1547]: time="2026-03-13T00:38:29.674966515Z" level=info msg="StartContainer for \"6ae59f77b0ceac6043d679031fbb9ea78fb4e064a8cfb275c8bf14ec325232f5\" returns successfully" Mar 13 00:38:30.334324 containerd[1547]: time="2026-03-13T00:38:30.334231907Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:30.335040 containerd[1547]: time="2026-03-13T00:38:30.335007896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 13 00:38:30.335563 containerd[1547]: time="2026-03-13T00:38:30.335522655Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:30.337095 containerd[1547]: time="2026-03-13T00:38:30.337059261Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:30.337809 containerd[1547]: time="2026-03-13T00:38:30.337774374Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 778.215796ms" Mar 13 00:38:30.337809 containerd[1547]: time="2026-03-13T00:38:30.337803897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 13 00:38:30.340284 containerd[1547]: time="2026-03-13T00:38:30.340252471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 13 00:38:30.343177 containerd[1547]: time="2026-03-13T00:38:30.343158230Z" level=info msg="CreateContainer within sandbox \"917cb57a0fd23acc6afbd9d5a95272732a1dfe27a8994f981053d84995cd1fed\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 13 00:38:30.350449 containerd[1547]: time="2026-03-13T00:38:30.350411567Z" level=info msg="Container 548e925e544b86cd5c72e2bef13300eae57d45efe7ebcccc6768b056ac66de39: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:38:30.360210 containerd[1547]: time="2026-03-13T00:38:30.359653843Z" level=info msg="CreateContainer within sandbox \"917cb57a0fd23acc6afbd9d5a95272732a1dfe27a8994f981053d84995cd1fed\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"548e925e544b86cd5c72e2bef13300eae57d45efe7ebcccc6768b056ac66de39\"" Mar 13 00:38:30.361941 containerd[1547]: time="2026-03-13T00:38:30.361920534Z" level=info msg="StartContainer for \"548e925e544b86cd5c72e2bef13300eae57d45efe7ebcccc6768b056ac66de39\"" Mar 13 00:38:30.363601 containerd[1547]: time="2026-03-13T00:38:30.363574379Z" level=info msg="connecting to shim 548e925e544b86cd5c72e2bef13300eae57d45efe7ebcccc6768b056ac66de39" address="unix:///run/containerd/s/d0d3bd503c2ec734bba5e771d2e8ddf41a85b50f0781335c010a0471bd6a01a5" protocol=ttrpc version=3 Mar 13 00:38:30.393450 
systemd[1]: Started cri-containerd-548e925e544b86cd5c72e2bef13300eae57d45efe7ebcccc6768b056ac66de39.scope - libcontainer container 548e925e544b86cd5c72e2bef13300eae57d45efe7ebcccc6768b056ac66de39. Mar 13 00:38:30.474562 containerd[1547]: time="2026-03-13T00:38:30.474524180Z" level=info msg="StartContainer for \"548e925e544b86cd5c72e2bef13300eae57d45efe7ebcccc6768b056ac66de39\" returns successfully" Mar 13 00:38:30.522902 containerd[1547]: time="2026-03-13T00:38:30.522861823Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:38:30.524193 containerd[1547]: time="2026-03-13T00:38:30.524160431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 13 00:38:30.525454 containerd[1547]: time="2026-03-13T00:38:30.525430957Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 184.957179ms" Mar 13 00:38:30.525496 containerd[1547]: time="2026-03-13T00:38:30.525455178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 13 00:38:30.528771 containerd[1547]: time="2026-03-13T00:38:30.528742826Z" level=info msg="CreateContainer within sandbox \"053ca4afe6a31ccb6ce2c8af77126e1502e37631a0f8be012208b9183579bf7f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 13 00:38:30.534341 containerd[1547]: time="2026-03-13T00:38:30.533146518Z" level=info msg="Container 56c1c00626ad9432a4c117dfe28d3897fa02f0e43693fc463e94e97fdc9d2be6: CDI devices from CRI Config.CDIDevices: []" 
Mar 13 00:38:30.553559 containerd[1547]: time="2026-03-13T00:38:30.553523344Z" level=info msg="CreateContainer within sandbox \"053ca4afe6a31ccb6ce2c8af77126e1502e37631a0f8be012208b9183579bf7f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"56c1c00626ad9432a4c117dfe28d3897fa02f0e43693fc463e94e97fdc9d2be6\"" Mar 13 00:38:30.554467 containerd[1547]: time="2026-03-13T00:38:30.554448613Z" level=info msg="StartContainer for \"56c1c00626ad9432a4c117dfe28d3897fa02f0e43693fc463e94e97fdc9d2be6\"" Mar 13 00:38:30.555434 containerd[1547]: time="2026-03-13T00:38:30.555390624Z" level=info msg="connecting to shim 56c1c00626ad9432a4c117dfe28d3897fa02f0e43693fc463e94e97fdc9d2be6" address="unix:///run/containerd/s/76ec0088dac6e557230dd09c0683a6ee2c5962d6ad73e558688e9e874c87bf28" protocol=ttrpc version=3 Mar 13 00:38:30.579379 systemd[1]: Started cri-containerd-56c1c00626ad9432a4c117dfe28d3897fa02f0e43693fc463e94e97fdc9d2be6.scope - libcontainer container 56c1c00626ad9432a4c117dfe28d3897fa02f0e43693fc463e94e97fdc9d2be6. 
Mar 13 00:38:30.645557 kubelet[2724]: I0313 00:38:30.645459 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-tsmbw" podStartSLOduration=25.771719913 podStartE2EDuration="34.645445771s" podCreationTimestamp="2026-03-13 00:37:56 +0000 UTC" firstStartedPulling="2026-03-13 00:38:20.685187398 +0000 UTC m=+42.389869111" lastFinishedPulling="2026-03-13 00:38:29.558913256 +0000 UTC m=+51.263594969" observedRunningTime="2026-03-13 00:38:30.643289199 +0000 UTC m=+52.347970912" watchObservedRunningTime="2026-03-13 00:38:30.645445771 +0000 UTC m=+52.350127494" Mar 13 00:38:30.647293 kubelet[2724]: I0313 00:38:30.646191 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5b66459b76-24wfm" podStartSLOduration=26.9830514 podStartE2EDuration="34.646181146s" podCreationTimestamp="2026-03-13 00:37:56 +0000 UTC" firstStartedPulling="2026-03-13 00:38:19.910927568 +0000 UTC m=+41.615609281" lastFinishedPulling="2026-03-13 00:38:27.574057314 +0000 UTC m=+49.278739027" observedRunningTime="2026-03-13 00:38:28.625868192 +0000 UTC m=+50.330549905" watchObservedRunningTime="2026-03-13 00:38:30.646181146 +0000 UTC m=+52.350862859" Mar 13 00:38:30.648763 containerd[1547]: time="2026-03-13T00:38:30.648729949Z" level=info msg="StartContainer for \"56c1c00626ad9432a4c117dfe28d3897fa02f0e43693fc463e94e97fdc9d2be6\" returns successfully" Mar 13 00:38:30.661561 kubelet[2724]: I0313 00:38:30.661143 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-npsmc" podStartSLOduration=22.298073747 podStartE2EDuration="33.661134133s" podCreationTimestamp="2026-03-13 00:37:57 +0000 UTC" firstStartedPulling="2026-03-13 00:38:18.975905158 +0000 UTC m=+40.680586871" lastFinishedPulling="2026-03-13 00:38:30.338965534 +0000 UTC m=+52.043647257" observedRunningTime="2026-03-13 00:38:30.660168631 +0000 UTC m=+52.364850354" 
watchObservedRunningTime="2026-03-13 00:38:30.661134133 +0000 UTC m=+52.365815846" Mar 13 00:38:31.465618 kubelet[2724]: I0313 00:38:31.465520 2724 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 13 00:38:31.466941 kubelet[2724]: I0313 00:38:31.466847 2724 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 13 00:38:31.650721 kubelet[2724]: I0313 00:38:31.650677 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5b66459b76-p78zh" podStartSLOduration=28.307710526 podStartE2EDuration="35.650664647s" podCreationTimestamp="2026-03-13 00:37:56 +0000 UTC" firstStartedPulling="2026-03-13 00:38:23.182957052 +0000 UTC m=+44.887638765" lastFinishedPulling="2026-03-13 00:38:30.525911173 +0000 UTC m=+52.230592886" observedRunningTime="2026-03-13 00:38:31.648669266 +0000 UTC m=+53.353350979" watchObservedRunningTime="2026-03-13 00:38:31.650664647 +0000 UTC m=+53.355346360" Mar 13 00:38:32.794326 kubelet[2724]: I0313 00:38:32.793597 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:38:35.587319 kubelet[2724]: I0313 00:38:35.587235 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:38:37.199636 kubelet[2724]: I0313 00:38:37.199563 2724 ???:1] "http: TLS handshake error from 205.210.31.215:63382: client sent an HTTP request to an HTTPS server" Mar 13 00:38:39.455522 systemd[1]: Started sshd@7-172.234.197.95:22-34.175.118.185:42916.service - OpenSSH per-connection server daemon (34.175.118.185:42916). 
Mar 13 00:38:40.699369 sshd[5237]: Invalid user claude from 34.175.118.185 port 42916 Mar 13 00:38:40.946121 sshd[5237]: Received disconnect from 34.175.118.185 port 42916:11: Bye Bye [preauth] Mar 13 00:38:40.946121 sshd[5237]: Disconnected from invalid user claude 34.175.118.185 port 42916 [preauth] Mar 13 00:38:40.948778 systemd[1]: sshd@7-172.234.197.95:22-34.175.118.185:42916.service: Deactivated successfully. Mar 13 00:38:51.385174 kubelet[2724]: E0313 00:38:51.384777 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:38:57.384892 kubelet[2724]: E0313 00:38:57.384856 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:38:59.946571 systemd[1]: Started sshd@8-172.234.197.95:22-103.63.25.171:51526.service - OpenSSH per-connection server daemon (103.63.25.171:51526). Mar 13 00:39:00.386291 kubelet[2724]: E0313 00:39:00.385857 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:39:01.220474 sshd[5309]: Invalid user kodi from 103.63.25.171 port 51526 Mar 13 00:39:01.465794 sshd[5309]: Received disconnect from 103.63.25.171 port 51526:11: Bye Bye [preauth] Mar 13 00:39:01.465976 sshd[5309]: Disconnected from invalid user kodi 103.63.25.171 port 51526 [preauth] Mar 13 00:39:01.468551 systemd[1]: sshd@8-172.234.197.95:22-103.63.25.171:51526.service: Deactivated successfully. 
Mar 13 00:39:11.384055 kubelet[2724]: E0313 00:39:11.384004 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:39:23.384858 kubelet[2724]: E0313 00:39:23.384816 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:39:47.384398 kubelet[2724]: E0313 00:39:47.384369 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:39:47.830828 systemd[1]: Started sshd@9-172.234.197.95:22-68.220.241.50:51908.service - OpenSSH per-connection server daemon (68.220.241.50:51908). Mar 13 00:39:48.005660 sshd[5520]: Accepted publickey for core from 68.220.241.50 port 51908 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:39:48.007809 sshd-session[5520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:39:48.013427 systemd-logind[1525]: New session 8 of user core. Mar 13 00:39:48.022472 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 13 00:39:48.158057 sshd[5523]: Connection closed by 68.220.241.50 port 51908 Mar 13 00:39:48.158677 sshd-session[5520]: pam_unix(sshd:session): session closed for user core Mar 13 00:39:48.163547 systemd[1]: sshd@9-172.234.197.95:22-68.220.241.50:51908.service: Deactivated successfully. Mar 13 00:39:48.165908 systemd[1]: session-8.scope: Deactivated successfully. Mar 13 00:39:48.168433 systemd-logind[1525]: Session 8 logged out. Waiting for processes to exit. Mar 13 00:39:48.169533 systemd-logind[1525]: Removed session 8. 
Mar 13 00:39:53.194564 systemd[1]: Started sshd@10-172.234.197.95:22-68.220.241.50:42220.service - OpenSSH per-connection server daemon (68.220.241.50:42220). Mar 13 00:39:53.357602 sshd[5573]: Accepted publickey for core from 68.220.241.50 port 42220 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:39:53.359440 sshd-session[5573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:39:53.364473 systemd-logind[1525]: New session 9 of user core. Mar 13 00:39:53.369409 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 13 00:39:53.489429 sshd[5576]: Connection closed by 68.220.241.50 port 42220 Mar 13 00:39:53.489985 sshd-session[5573]: pam_unix(sshd:session): session closed for user core Mar 13 00:39:53.494915 systemd[1]: sshd@10-172.234.197.95:22-68.220.241.50:42220.service: Deactivated successfully. Mar 13 00:39:53.497596 systemd[1]: session-9.scope: Deactivated successfully. Mar 13 00:39:53.499047 systemd-logind[1525]: Session 9 logged out. Waiting for processes to exit. Mar 13 00:39:53.500736 systemd-logind[1525]: Removed session 9. Mar 13 00:39:56.385737 kubelet[2724]: E0313 00:39:56.385331 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:39:58.519527 systemd[1]: Started sshd@11-172.234.197.95:22-68.220.241.50:42234.service - OpenSSH per-connection server daemon (68.220.241.50:42234). Mar 13 00:39:58.661514 sshd[5589]: Accepted publickey for core from 68.220.241.50 port 42234 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:39:58.663134 sshd-session[5589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:39:58.668082 systemd-logind[1525]: New session 10 of user core. Mar 13 00:39:58.672543 systemd[1]: Started session-10.scope - Session 10 of User core. 
Mar 13 00:39:58.786396 sshd[5592]: Connection closed by 68.220.241.50 port 42234 Mar 13 00:39:58.787866 sshd-session[5589]: pam_unix(sshd:session): session closed for user core Mar 13 00:39:58.792030 systemd-logind[1525]: Session 10 logged out. Waiting for processes to exit. Mar 13 00:39:58.792998 systemd[1]: sshd@11-172.234.197.95:22-68.220.241.50:42234.service: Deactivated successfully. Mar 13 00:39:58.794978 systemd[1]: session-10.scope: Deactivated successfully. Mar 13 00:39:58.797466 systemd-logind[1525]: Removed session 10. Mar 13 00:39:58.814988 systemd[1]: Started sshd@12-172.234.197.95:22-68.220.241.50:42236.service - OpenSSH per-connection server daemon (68.220.241.50:42236). Mar 13 00:39:58.963977 sshd[5605]: Accepted publickey for core from 68.220.241.50 port 42236 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:39:58.965466 sshd-session[5605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:39:58.970358 systemd-logind[1525]: New session 11 of user core. Mar 13 00:39:58.975395 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 13 00:39:59.195183 sshd[5619]: Connection closed by 68.220.241.50 port 42236 Mar 13 00:39:59.196239 sshd-session[5605]: pam_unix(sshd:session): session closed for user core Mar 13 00:39:59.201556 systemd[1]: sshd@12-172.234.197.95:22-68.220.241.50:42236.service: Deactivated successfully. Mar 13 00:39:59.204704 systemd[1]: session-11.scope: Deactivated successfully. Mar 13 00:39:59.208866 systemd-logind[1525]: Session 11 logged out. Waiting for processes to exit. Mar 13 00:39:59.210500 systemd-logind[1525]: Removed session 11. Mar 13 00:39:59.234139 systemd[1]: Started sshd@13-172.234.197.95:22-68.220.241.50:42246.service - OpenSSH per-connection server daemon (68.220.241.50:42246). 
Mar 13 00:39:59.399050 sshd[5638]: Accepted publickey for core from 68.220.241.50 port 42246 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:39:59.400729 sshd-session[5638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:39:59.406294 systemd-logind[1525]: New session 12 of user core. Mar 13 00:39:59.411419 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 13 00:39:59.536667 sshd[5648]: Connection closed by 68.220.241.50 port 42246 Mar 13 00:39:59.538562 sshd-session[5638]: pam_unix(sshd:session): session closed for user core Mar 13 00:39:59.543898 systemd-logind[1525]: Session 12 logged out. Waiting for processes to exit. Mar 13 00:39:59.544370 systemd[1]: sshd@13-172.234.197.95:22-68.220.241.50:42246.service: Deactivated successfully. Mar 13 00:39:59.547684 systemd[1]: session-12.scope: Deactivated successfully. Mar 13 00:39:59.549702 systemd-logind[1525]: Removed session 12. Mar 13 00:40:04.567750 systemd[1]: Started sshd@14-172.234.197.95:22-68.220.241.50:34706.service - OpenSSH per-connection server daemon (68.220.241.50:34706). Mar 13 00:40:04.715325 sshd[5699]: Accepted publickey for core from 68.220.241.50 port 34706 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:40:04.716797 sshd-session[5699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:40:04.721255 systemd-logind[1525]: New session 13 of user core. Mar 13 00:40:04.726423 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 13 00:40:04.834097 sshd[5702]: Connection closed by 68.220.241.50 port 34706 Mar 13 00:40:04.834742 sshd-session[5699]: pam_unix(sshd:session): session closed for user core Mar 13 00:40:04.840025 systemd[1]: sshd@14-172.234.197.95:22-68.220.241.50:34706.service: Deactivated successfully. Mar 13 00:40:04.842394 systemd[1]: session-13.scope: Deactivated successfully. 
Mar 13 00:40:04.843286 systemd-logind[1525]: Session 13 logged out. Waiting for processes to exit. Mar 13 00:40:04.845185 systemd-logind[1525]: Removed session 13. Mar 13 00:40:04.863117 systemd[1]: Started sshd@15-172.234.197.95:22-68.220.241.50:34720.service - OpenSSH per-connection server daemon (68.220.241.50:34720). Mar 13 00:40:05.013565 sshd[5714]: Accepted publickey for core from 68.220.241.50 port 34720 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:40:05.015087 sshd-session[5714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:40:05.019845 systemd-logind[1525]: New session 14 of user core. Mar 13 00:40:05.027404 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 13 00:40:05.316974 sshd[5717]: Connection closed by 68.220.241.50 port 34720 Mar 13 00:40:05.318662 sshd-session[5714]: pam_unix(sshd:session): session closed for user core Mar 13 00:40:05.323761 systemd-logind[1525]: Session 14 logged out. Waiting for processes to exit. Mar 13 00:40:05.324568 systemd[1]: sshd@15-172.234.197.95:22-68.220.241.50:34720.service: Deactivated successfully. Mar 13 00:40:05.327226 systemd[1]: session-14.scope: Deactivated successfully. Mar 13 00:40:05.329503 systemd-logind[1525]: Removed session 14. Mar 13 00:40:05.351586 systemd[1]: Started sshd@16-172.234.197.95:22-68.220.241.50:34732.service - OpenSSH per-connection server daemon (68.220.241.50:34732). Mar 13 00:40:05.512140 sshd[5727]: Accepted publickey for core from 68.220.241.50 port 34732 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:40:05.513599 sshd-session[5727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:40:05.519236 systemd-logind[1525]: New session 15 of user core. Mar 13 00:40:05.522384 systemd[1]: Started session-15.scope - Session 15 of User core. 
Mar 13 00:40:06.089745 sshd[5730]: Connection closed by 68.220.241.50 port 34732 Mar 13 00:40:06.091490 sshd-session[5727]: pam_unix(sshd:session): session closed for user core Mar 13 00:40:06.097457 systemd-logind[1525]: Session 15 logged out. Waiting for processes to exit. Mar 13 00:40:06.098531 systemd[1]: sshd@16-172.234.197.95:22-68.220.241.50:34732.service: Deactivated successfully. Mar 13 00:40:06.104342 systemd[1]: session-15.scope: Deactivated successfully. Mar 13 00:40:06.107144 systemd-logind[1525]: Removed session 15. Mar 13 00:40:06.119680 systemd[1]: Started sshd@17-172.234.197.95:22-68.220.241.50:34740.service - OpenSSH per-connection server daemon (68.220.241.50:34740). Mar 13 00:40:06.271886 sshd[5771]: Accepted publickey for core from 68.220.241.50 port 34740 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:40:06.273764 sshd-session[5771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:40:06.279417 systemd-logind[1525]: New session 16 of user core. Mar 13 00:40:06.285404 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 13 00:40:06.485947 sshd[5778]: Connection closed by 68.220.241.50 port 34740 Mar 13 00:40:06.487485 sshd-session[5771]: pam_unix(sshd:session): session closed for user core Mar 13 00:40:06.492496 systemd[1]: sshd@17-172.234.197.95:22-68.220.241.50:34740.service: Deactivated successfully. Mar 13 00:40:06.495212 systemd[1]: session-16.scope: Deactivated successfully. Mar 13 00:40:06.496723 systemd-logind[1525]: Session 16 logged out. Waiting for processes to exit. Mar 13 00:40:06.499093 systemd-logind[1525]: Removed session 16. Mar 13 00:40:06.516907 systemd[1]: Started sshd@18-172.234.197.95:22-68.220.241.50:34754.service - OpenSSH per-connection server daemon (68.220.241.50:34754). 
Mar 13 00:40:06.661548 sshd[5788]: Accepted publickey for core from 68.220.241.50 port 34754 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:40:06.663683 sshd-session[5788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:40:06.669912 systemd-logind[1525]: New session 17 of user core. Mar 13 00:40:06.674417 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 13 00:40:06.784562 sshd[5791]: Connection closed by 68.220.241.50 port 34754 Mar 13 00:40:06.786417 sshd-session[5788]: pam_unix(sshd:session): session closed for user core Mar 13 00:40:06.791056 systemd[1]: sshd@18-172.234.197.95:22-68.220.241.50:34754.service: Deactivated successfully. Mar 13 00:40:06.793672 systemd[1]: session-17.scope: Deactivated successfully. Mar 13 00:40:06.795166 systemd-logind[1525]: Session 17 logged out. Waiting for processes to exit. Mar 13 00:40:06.797164 systemd-logind[1525]: Removed session 17. Mar 13 00:40:11.813823 systemd[1]: Started sshd@19-172.234.197.95:22-68.220.241.50:34766.service - OpenSSH per-connection server daemon (68.220.241.50:34766). Mar 13 00:40:11.963087 sshd[5828]: Accepted publickey for core from 68.220.241.50 port 34766 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:40:11.964502 sshd-session[5828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:40:11.968978 systemd-logind[1525]: New session 18 of user core. Mar 13 00:40:11.976437 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 13 00:40:12.079622 sshd[5831]: Connection closed by 68.220.241.50 port 34766 Mar 13 00:40:12.080475 sshd-session[5828]: pam_unix(sshd:session): session closed for user core Mar 13 00:40:12.085127 systemd[1]: sshd@19-172.234.197.95:22-68.220.241.50:34766.service: Deactivated successfully. Mar 13 00:40:12.087894 systemd[1]: session-18.scope: Deactivated successfully. 
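Each SSH connection above follows the same lifecycle: service socket started, publickey accepted, PAM session opened, `systemd-logind` creates session N, connection closed, scope deactivated, session removed. A small parser that pairs the logind open/close events to reconstruct which sessions completed could look like this (the regexes are assumptions tuned to this journal's format, for illustration only):

```python
import re

# Pair "New session N" / "Removed session N" events from journal text to
# reconstruct the SSH session lifecycle seen above. Regexes match this log's
# systemd-logind phrasing and are illustrative only.

OPEN = re.compile(r"New session (\d+) of user (\w+)")
CLOSE = re.compile(r"Removed session (\d+)")

def session_pairs(log_text):
    """Return [(session_id, user), ...] for sessions that opened and closed."""
    opened, pairs = {}, []
    for line in log_text.splitlines():
        if (m := OPEN.search(line)):
            opened[m.group(1)] = m.group(2)
        elif (m := CLOSE.search(line)) and m.group(1) in opened:
            pairs.append((m.group(1), opened.pop(m.group(1))))
    return pairs

log = """\
systemd-logind[1525]: New session 8 of user core.
systemd-logind[1525]: Removed session 8.
systemd-logind[1525]: New session 9 of user core.
"""
print(session_pairs(log))  # session 9 has no matching close yet
```

Applied to the transcript above, every session (8 through 18 so far) would pair up, consistent with short-lived per-command connections rather than a hung login.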
Mar 13 00:40:12.088949 systemd-logind[1525]: Session 18 logged out. Waiting for processes to exit. Mar 13 00:40:12.090561 systemd-logind[1525]: Removed session 18. Mar 13 00:40:14.737112 update_engine[1527]: I20260313 00:40:14.737046 1527 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 13 00:40:14.737112 update_engine[1527]: I20260313 00:40:14.737097 1527 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 13 00:40:14.737709 update_engine[1527]: I20260313 00:40:14.737355 1527 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 13 00:40:14.737998 update_engine[1527]: I20260313 00:40:14.737968 1527 omaha_request_params.cc:62] Current group set to stable Mar 13 00:40:14.738228 update_engine[1527]: I20260313 00:40:14.738075 1527 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 13 00:40:14.738228 update_engine[1527]: I20260313 00:40:14.738088 1527 update_attempter.cc:643] Scheduling an action processor start. 
Mar 13 00:40:14.738228 update_engine[1527]: I20260313 00:40:14.738104 1527 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 13 00:40:14.738228 update_engine[1527]: I20260313 00:40:14.738130 1527 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 13 00:40:14.738228 update_engine[1527]: I20260313 00:40:14.738188 1527 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 13 00:40:14.738228 update_engine[1527]: I20260313 00:40:14.738198 1527 omaha_request_action.cc:272] Request: Mar 13 00:40:14.738228 update_engine[1527]: [Omaha request XML body not preserved in this capture] Mar 13 00:40:14.738228 update_engine[1527]: I20260313 00:40:14.738206 1527 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 13 00:40:14.742978 locksmithd[1567]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 13 00:40:14.744203 update_engine[1527]: I20260313 00:40:14.744175 1527 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 13 00:40:14.744838 update_engine[1527]: I20260313 00:40:14.744797 1527 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 13 00:40:14.766809 update_engine[1527]: E20260313 00:40:14.766762 1527 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 13 00:40:14.766872 update_engine[1527]: I20260313 00:40:14.766839 1527 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 13 00:40:15.384219 kubelet[2724]: E0313 00:40:15.384188 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:40:17.118565 systemd[1]: Started sshd@20-172.234.197.95:22-68.220.241.50:37574.service - OpenSSH per-connection server daemon (68.220.241.50:37574). Mar 13 00:40:17.271461 sshd[5843]: Accepted publickey for core from 68.220.241.50 port 37574 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:40:17.273294 sshd-session[5843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:40:17.278681 systemd-logind[1525]: New session 19 of user core. Mar 13 00:40:17.282403 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 13 00:40:17.410388 sshd[5846]: Connection closed by 68.220.241.50 port 37574 Mar 13 00:40:17.411478 sshd-session[5843]: pam_unix(sshd:session): session closed for user core Mar 13 00:40:17.416198 systemd[1]: sshd@20-172.234.197.95:22-68.220.241.50:37574.service: Deactivated successfully. Mar 13 00:40:17.418678 systemd[1]: session-19.scope: Deactivated successfully. Mar 13 00:40:17.420188 systemd-logind[1525]: Session 19 logged out. Waiting for processes to exit. Mar 13 00:40:17.422294 systemd-logind[1525]: Removed session 19. 
Mar 13 00:40:19.383862 kubelet[2724]: E0313 00:40:19.383830 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Mar 13 00:40:22.454934 systemd[1]: Started sshd@21-172.234.197.95:22-68.220.241.50:34108.service - OpenSSH per-connection server daemon (68.220.241.50:34108). Mar 13 00:40:22.640575 sshd[5884]: Accepted publickey for core from 68.220.241.50 port 34108 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:40:22.643822 sshd-session[5884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:40:22.649859 systemd-logind[1525]: New session 20 of user core. Mar 13 00:40:22.657392 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 13 00:40:22.791452 sshd[5887]: Connection closed by 68.220.241.50 port 34108 Mar 13 00:40:22.793576 sshd-session[5884]: pam_unix(sshd:session): session closed for user core Mar 13 00:40:22.798429 systemd[1]: sshd@21-172.234.197.95:22-68.220.241.50:34108.service: Deactivated successfully. Mar 13 00:40:22.800900 systemd[1]: session-20.scope: Deactivated successfully. Mar 13 00:40:22.802073 systemd-logind[1525]: Session 20 logged out. Waiting for processes to exit. Mar 13 00:40:22.803931 systemd-logind[1525]: Removed session 20. Mar 13 00:40:24.736347 update_engine[1527]: I20260313 00:40:24.736255 1527 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 13 00:40:24.736711 update_engine[1527]: I20260313 00:40:24.736377 1527 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 13 00:40:24.737177 update_engine[1527]: I20260313 00:40:24.737136 1527 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 13 00:40:24.740369 update_engine[1527]: E20260313 00:40:24.740309 1527 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 13 00:40:24.740422 update_engine[1527]: I20260313 00:40:24.740391 1527 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
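The `Could not resolve host: disabled` errors are expected behavior when automatic updates are turned off: on Flatcar the Omaha update server can be set to the literal string `disabled` in the update configuration, so the fetcher tries to resolve `disabled` as a hostname, fails DNS, logs `No HTTP response, retry N`, and reschedules. A simplified model of that retry loop (the real client is C++ in update_engine; this is an illustrative sketch only):

```python
# Simplified model of libcurl_http_fetcher's retry behavior seen above:
# every attempt fails DNS resolution for the placeholder host "disabled",
# logs the error plus a retry line, and the cycle repeats. Illustrative only.

def attempt_fetch(url):
    """Raise OSError for the placeholder host, mimicking a DNS failure."""
    host = url.split("//")[-1].split("/")[0]
    if host == "disabled":  # placeholder hostname never resolves
        raise OSError(f"Could not resolve host: {host}")

def fetch_with_retries(url, max_retries=3):
    """Return the log lines produced; empty list means the fetch succeeded."""
    log = []
    for retry in range(1, max_retries + 1):
        try:
            attempt_fetch(url)
            return log
        except OSError as e:
            log.append(f"E Unable to get http response code: {e}")
            log.append(f"I No HTTP response, retry {retry}")
    return log

for line in fetch_with_retries("https://disabled/update", max_retries=2):
    print(line)
```

Because the failure is deliberate (updates disabled), these E-level lines are noise rather than a fault; the 1-second timeout source and periodic retries in the log match this pattern.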