Jul 15 05:32:12.834129 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Jul 15 03:28:48 -00 2025
Jul 15 05:32:12.834150 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=926b029026d98240a9e8b6527b65fc026ae523bea87c3b77ffd7237bcc7be4fb
Jul 15 05:32:12.834159 kernel: BIOS-provided physical RAM map:
Jul 15 05:32:12.834168 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Jul 15 05:32:12.834173 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Jul 15 05:32:12.834178 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 15 05:32:12.834185 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Jul 15 05:32:12.834190 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Jul 15 05:32:12.834196 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 15 05:32:12.834201 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 15 05:32:12.834207 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 15 05:32:12.834213 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 15 05:32:12.834221 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Jul 15 05:32:12.834226 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 15 05:32:12.834233 kernel: NX (Execute Disable) protection: active
Jul 15 05:32:12.834239 kernel: APIC: Static calls initialized
Jul 15 05:32:12.834245 kernel: SMBIOS 2.8 present.
Jul 15 05:32:12.834253 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Jul 15 05:32:12.834259 kernel: DMI: Memory slots populated: 1/1
Jul 15 05:32:12.834264 kernel: Hypervisor detected: KVM
Jul 15 05:32:12.834270 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 15 05:32:12.834276 kernel: kvm-clock: using sched offset of 5632120180 cycles
Jul 15 05:32:12.834282 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 15 05:32:12.834289 kernel: tsc: Detected 2000.000 MHz processor
Jul 15 05:32:12.834295 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 15 05:32:12.834302 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 15 05:32:12.834308 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Jul 15 05:32:12.834316 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 15 05:32:12.834322 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 15 05:32:12.834328 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Jul 15 05:32:12.834334 kernel: Using GB pages for direct mapping
Jul 15 05:32:12.834340 kernel: ACPI: Early table checksum verification disabled
Jul 15 05:32:12.834346 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Jul 15 05:32:12.834352 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 05:32:12.834366 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 05:32:12.834373 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 05:32:12.834381 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jul 15 05:32:12.834387 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 05:32:12.834393 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 05:32:12.834399 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 05:32:12.834408 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 05:32:12.834414 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Jul 15 05:32:12.834423 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Jul 15 05:32:12.834429 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jul 15 05:32:12.834436 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Jul 15 05:32:12.834442 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Jul 15 05:32:12.834448 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Jul 15 05:32:12.834454 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Jul 15 05:32:12.834461 kernel: No NUMA configuration found
Jul 15 05:32:12.834467 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Jul 15 05:32:12.834475 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff]
Jul 15 05:32:12.834481 kernel: Zone ranges:
Jul 15 05:32:12.834488 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 15 05:32:12.834494 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 15 05:32:12.834500 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Jul 15 05:32:12.834506 kernel: Device empty
Jul 15 05:32:12.834513 kernel: Movable zone start for each node
Jul 15 05:32:12.834519 kernel: Early memory node ranges
Jul 15 05:32:12.834525 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 15 05:32:12.834531 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Jul 15 05:32:12.834540 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Jul 15 05:32:12.834546 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Jul 15 05:32:12.834552 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 15 05:32:12.834558 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 15 05:32:12.834565 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jul 15 05:32:12.834571 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 15 05:32:12.834577 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 15 05:32:12.834584 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 15 05:32:12.834590 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 15 05:32:12.834599 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 15 05:32:12.834605 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 15 05:32:12.834611 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 15 05:32:12.834618 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 15 05:32:12.834624 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 15 05:32:12.834630 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 15 05:32:12.834636 kernel: TSC deadline timer available
Jul 15 05:32:12.834643 kernel: CPU topo: Max. logical packages: 1
Jul 15 05:32:12.834649 kernel: CPU topo: Max. logical dies: 1
Jul 15 05:32:12.834657 kernel: CPU topo: Max. dies per package: 1
Jul 15 05:32:12.834663 kernel: CPU topo: Max. threads per core: 1
Jul 15 05:32:12.834669 kernel: CPU topo: Num. cores per package: 2
Jul 15 05:32:12.834676 kernel: CPU topo: Num. threads per package: 2
Jul 15 05:32:12.834682 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jul 15 05:32:12.834688 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 15 05:32:12.834694 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 15 05:32:12.834701 kernel: kvm-guest: setup PV sched yield
Jul 15 05:32:12.834707 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 15 05:32:12.834715 kernel: Booting paravirtualized kernel on KVM
Jul 15 05:32:12.834721 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 15 05:32:12.834728 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 15 05:32:12.834734 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jul 15 05:32:12.834740 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jul 15 05:32:12.834747 kernel: pcpu-alloc: [0] 0 1
Jul 15 05:32:12.834753 kernel: kvm-guest: PV spinlocks enabled
Jul 15 05:32:12.834759 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 15 05:32:12.834766 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=926b029026d98240a9e8b6527b65fc026ae523bea87c3b77ffd7237bcc7be4fb
Jul 15 05:32:12.834775 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 15 05:32:12.834781 kernel: random: crng init done
Jul 15 05:32:12.834787 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 15 05:32:12.834794 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 15 05:32:12.834800 kernel: Fallback order for Node 0: 0
Jul 15 05:32:12.834806 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Jul 15 05:32:12.834813 kernel: Policy zone: Normal
Jul 15 05:32:12.834819 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 15 05:32:12.834825 kernel: software IO TLB: area num 2.
Jul 15 05:32:12.834833 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 15 05:32:12.834840 kernel: ftrace: allocating 40097 entries in 157 pages
Jul 15 05:32:12.834846 kernel: ftrace: allocated 157 pages with 5 groups
Jul 15 05:32:12.834852 kernel: Dynamic Preempt: voluntary
Jul 15 05:32:12.834858 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 15 05:32:12.834865 kernel: rcu: RCU event tracing is enabled.
Jul 15 05:32:12.834872 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 15 05:32:12.834879 kernel: Trampoline variant of Tasks RCU enabled.
Jul 15 05:32:12.834885 kernel: Rude variant of Tasks RCU enabled.
Jul 15 05:32:12.834893 kernel: Tracing variant of Tasks RCU enabled.
Jul 15 05:32:12.834899 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 15 05:32:12.834906 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 15 05:32:12.834912 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 15 05:32:12.834924 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 15 05:32:12.834933 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 15 05:32:12.834939 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 15 05:32:12.834946 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 15 05:32:12.834953 kernel: Console: colour VGA+ 80x25
Jul 15 05:32:12.834959 kernel: printk: legacy console [tty0] enabled
Jul 15 05:32:12.834966 kernel: printk: legacy console [ttyS0] enabled
Jul 15 05:32:12.834974 kernel: ACPI: Core revision 20240827
Jul 15 05:32:12.834981 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 15 05:32:12.834988 kernel: APIC: Switch to symmetric I/O mode setup
Jul 15 05:32:12.834994 kernel: x2apic enabled
Jul 15 05:32:12.835001 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 15 05:32:12.835010 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 15 05:32:12.835016 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 15 05:32:12.835023 kernel: kvm-guest: setup PV IPIs
Jul 15 05:32:12.835030 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 15 05:32:12.835036 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Jul 15 05:32:12.835043 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Jul 15 05:32:12.835050 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 15 05:32:12.835056 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 15 05:32:12.835063 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 15 05:32:12.835088 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 15 05:32:12.835108 kernel: Spectre V2 : Mitigation: Retpolines
Jul 15 05:32:12.836113 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 15 05:32:12.836126 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jul 15 05:32:12.836134 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 15 05:32:12.836141 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 15 05:32:12.836148 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 15 05:32:12.836156 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 15 05:32:12.836167 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 15 05:32:12.836174 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 15 05:32:12.836180 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 15 05:32:12.836187 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 15 05:32:12.836194 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jul 15 05:32:12.836201 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 15 05:32:12.836207 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Jul 15 05:32:12.836214 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Jul 15 05:32:12.836221 kernel: Freeing SMP alternatives memory: 32K
Jul 15 05:32:12.836229 kernel: pid_max: default: 32768 minimum: 301
Jul 15 05:32:12.836236 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 15 05:32:12.836243 kernel: landlock: Up and running.
Jul 15 05:32:12.836249 kernel: SELinux: Initializing.
Jul 15 05:32:12.836256 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 05:32:12.836263 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 05:32:12.836270 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jul 15 05:32:12.836276 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 15 05:32:12.836283 kernel: ... version: 0
Jul 15 05:32:12.836291 kernel: ... bit width: 48
Jul 15 05:32:12.836298 kernel: ... generic registers: 6
Jul 15 05:32:12.836304 kernel: ... value mask: 0000ffffffffffff
Jul 15 05:32:12.836311 kernel: ... max period: 00007fffffffffff
Jul 15 05:32:12.836318 kernel: ... fixed-purpose events: 0
Jul 15 05:32:12.836324 kernel: ... event mask: 000000000000003f
Jul 15 05:32:12.836331 kernel: signal: max sigframe size: 3376
Jul 15 05:32:12.836338 kernel: rcu: Hierarchical SRCU implementation.
Jul 15 05:32:12.836345 kernel: rcu: Max phase no-delay instances is 400.
Jul 15 05:32:12.836352 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 15 05:32:12.836360 kernel: smp: Bringing up secondary CPUs ...
Jul 15 05:32:12.836367 kernel: smpboot: x86: Booting SMP configuration:
Jul 15 05:32:12.836373 kernel: .... node #0, CPUs: #1
Jul 15 05:32:12.836380 kernel: smp: Brought up 1 node, 2 CPUs
Jul 15 05:32:12.836387 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Jul 15 05:32:12.836393 kernel: Memory: 3961808K/4193772K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54608K init, 2360K bss, 227288K reserved, 0K cma-reserved)
Jul 15 05:32:12.836400 kernel: devtmpfs: initialized
Jul 15 05:32:12.836407 kernel: x86/mm: Memory block size: 128MB
Jul 15 05:32:12.836413 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 15 05:32:12.836422 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 15 05:32:12.836429 kernel: pinctrl core: initialized pinctrl subsystem
Jul 15 05:32:12.836435 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 15 05:32:12.836442 kernel: audit: initializing netlink subsys (disabled)
Jul 15 05:32:12.836449 kernel: audit: type=2000 audit(1752557529.857:1): state=initialized audit_enabled=0 res=1
Jul 15 05:32:12.836455 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 15 05:32:12.836462 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 15 05:32:12.836468 kernel: cpuidle: using governor menu
Jul 15 05:32:12.836477 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 15 05:32:12.836484 kernel: dca service started, version 1.12.1
Jul 15 05:32:12.836490 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jul 15 05:32:12.836497 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 15 05:32:12.836504 kernel: PCI: Using configuration type 1 for base access
Jul 15 05:32:12.836510 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 15 05:32:12.836517 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 15 05:32:12.836524 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 15 05:32:12.836530 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 15 05:32:12.836539 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 15 05:32:12.836545 kernel: ACPI: Added _OSI(Module Device)
Jul 15 05:32:12.836552 kernel: ACPI: Added _OSI(Processor Device)
Jul 15 05:32:12.836558 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 15 05:32:12.836565 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 15 05:32:12.836572 kernel: ACPI: Interpreter enabled
Jul 15 05:32:12.836579 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 15 05:32:12.836586 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 15 05:32:12.836593 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 15 05:32:12.836601 kernel: PCI: Using E820 reservations for host bridge windows
Jul 15 05:32:12.836608 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 15 05:32:12.836614 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 15 05:32:12.836794 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 15 05:32:12.836910 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 15 05:32:12.837018 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 15 05:32:12.837028 kernel: PCI host bridge to bus 0000:00
Jul 15 05:32:12.837164 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 15 05:32:12.837272 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 15 05:32:12.837371 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 15 05:32:12.837467 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jul 15 05:32:12.837563 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 15 05:32:12.837658 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Jul 15 05:32:12.837754 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 15 05:32:12.837889 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jul 15 05:32:12.838011 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 15 05:32:12.840512 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jul 15 05:32:12.840631 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jul 15 05:32:12.840740 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jul 15 05:32:12.840846 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 15 05:32:12.840964 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Jul 15 05:32:12.841100 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Jul 15 05:32:12.841214 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jul 15 05:32:12.841321 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 15 05:32:12.841438 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 15 05:32:12.841546 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Jul 15 05:32:12.841652 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jul 15 05:32:12.841759 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 15 05:32:12.841871 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jul 15 05:32:12.841988 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jul 15 05:32:12.845061 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 15 05:32:12.845213 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jul 15 05:32:12.845323 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Jul 15 05:32:12.845429 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Jul 15 05:32:12.845553 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jul 15 05:32:12.845660 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jul 15 05:32:12.845669 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 15 05:32:12.845677 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 15 05:32:12.845684 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 15 05:32:12.845691 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 15 05:32:12.845697 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 15 05:32:12.845704 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 15 05:32:12.845713 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 15 05:32:12.845720 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 15 05:32:12.845727 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 15 05:32:12.845734 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 15 05:32:12.845741 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 15 05:32:12.845747 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 15 05:32:12.845754 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 15 05:32:12.845761 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 15 05:32:12.845767 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 15 05:32:12.845776 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 15 05:32:12.845782 kernel: iommu: Default domain type: Translated
Jul 15 05:32:12.845789 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 15 05:32:12.845796 kernel: PCI: Using ACPI for IRQ routing
Jul 15 05:32:12.845803 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 15 05:32:12.845810 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Jul 15 05:32:12.845816 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Jul 15 05:32:12.845921 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 15 05:32:12.846029 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 15 05:32:12.846171 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 15 05:32:12.846182 kernel: vgaarb: loaded
Jul 15 05:32:12.846190 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 15 05:32:12.846197 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 15 05:32:12.846204 kernel: clocksource: Switched to clocksource kvm-clock
Jul 15 05:32:12.846211 kernel: VFS: Disk quotas dquot_6.6.0
Jul 15 05:32:12.846218 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 15 05:32:12.846225 kernel: pnp: PnP ACPI init
Jul 15 05:32:12.846353 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 15 05:32:12.846364 kernel: pnp: PnP ACPI: found 5 devices
Jul 15 05:32:12.846372 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 15 05:32:12.846378 kernel: NET: Registered PF_INET protocol family
Jul 15 05:32:12.846386 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 15 05:32:12.846393 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 15 05:32:12.846400 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 15 05:32:12.846407 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 15 05:32:12.846416 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 15 05:32:12.846423 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 15 05:32:12.846430 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 05:32:12.846437 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 05:32:12.846444 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 15 05:32:12.846451 kernel: NET: Registered PF_XDP protocol family
Jul 15 05:32:12.846550 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 15 05:32:12.846654 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 15 05:32:12.846752 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 15 05:32:12.846853 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jul 15 05:32:12.846951 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 15 05:32:12.847048 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Jul 15 05:32:12.847057 kernel: PCI: CLS 0 bytes, default 64
Jul 15 05:32:12.847064 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 15 05:32:12.847126 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Jul 15 05:32:12.847135 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Jul 15 05:32:12.847141 kernel: Initialise system trusted keyrings
Jul 15 05:32:12.847152 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 15 05:32:12.847159 kernel: Key type asymmetric registered
Jul 15 05:32:12.847166 kernel: Asymmetric key parser 'x509' registered
Jul 15 05:32:12.847173 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 15 05:32:12.847179 kernel: io scheduler mq-deadline registered
Jul 15 05:32:12.847186 kernel: io scheduler kyber registered
Jul 15 05:32:12.847193 kernel: io scheduler bfq registered
Jul 15 05:32:12.847200 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 15 05:32:12.847207 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 15 05:32:12.848213 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 15 05:32:12.848231 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 15 05:32:12.848238 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 15 05:32:12.848246 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 15 05:32:12.848252 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 15 05:32:12.848259 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 15 05:32:12.848266 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 15 05:32:12.848397 kernel: rtc_cmos 00:03: RTC can wake from S4
Jul 15 05:32:12.848501 kernel: rtc_cmos 00:03: registered as rtc0
Jul 15 05:32:12.848606 kernel: rtc_cmos 00:03: setting system clock to 2025-07-15T05:32:12 UTC (1752557532)
Jul 15 05:32:12.848706 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 15 05:32:12.848715 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 15 05:32:12.848722 kernel: NET: Registered PF_INET6 protocol family
Jul 15 05:32:12.848729 kernel: Segment Routing with IPv6
Jul 15 05:32:12.848735 kernel: In-situ OAM (IOAM) with IPv6
Jul 15 05:32:12.848742 kernel: NET: Registered PF_PACKET protocol family
Jul 15 05:32:12.848749 kernel: Key type dns_resolver registered
Jul 15 05:32:12.848758 kernel: IPI shorthand broadcast: enabled
Jul 15 05:32:12.848766 kernel: sched_clock: Marking stable (2485004530, 212918390)->(2771219390, -73296470)
Jul 15 05:32:12.848772 kernel: registered taskstats version 1
Jul 15 05:32:12.848779 kernel: Loading compiled-in X.509 certificates
Jul 15 05:32:12.848786 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: a24478b628e55368911ce1800a2bd6bc158938c7'
Jul 15 05:32:12.848793 kernel: Demotion targets for Node 0: null
Jul 15 05:32:12.848799 kernel: Key type .fscrypt registered
Jul 15 05:32:12.848805 kernel: Key type fscrypt-provisioning registered
Jul 15 05:32:12.848812 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 15 05:32:12.848821 kernel: ima: Allocated hash algorithm: sha1
Jul 15 05:32:12.848828 kernel: ima: No architecture policies found
Jul 15 05:32:12.848834 kernel: clk: Disabling unused clocks
Jul 15 05:32:12.848841 kernel: Warning: unable to open an initial console.
Jul 15 05:32:12.848848 kernel: Freeing unused kernel image (initmem) memory: 54608K
Jul 15 05:32:12.848854 kernel: Write protecting the kernel read-only data: 24576k
Jul 15 05:32:12.848862 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 15 05:32:12.848868 kernel: Run /init as init process
Jul 15 05:32:12.848875 kernel: with arguments:
Jul 15 05:32:12.848884 kernel: /init
Jul 15 05:32:12.848890 kernel: with environment:
Jul 15 05:32:12.848897 kernel: HOME=/
Jul 15 05:32:12.848904 kernel: TERM=linux
Jul 15 05:32:12.848911 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 15 05:32:12.848934 systemd[1]: Successfully made /usr/ read-only.
Jul 15 05:32:12.848946 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 05:32:12.848954 systemd[1]: Detected virtualization kvm.
Jul 15 05:32:12.848963 systemd[1]: Detected architecture x86-64.
Jul 15 05:32:12.848971 systemd[1]: Running in initrd.
Jul 15 05:32:12.848978 systemd[1]: No hostname configured, using default hostname.
Jul 15 05:32:12.848985 systemd[1]: Hostname set to .
Jul 15 05:32:12.848993 systemd[1]: Initializing machine ID from random generator.
Jul 15 05:32:12.849000 systemd[1]: Queued start job for default target initrd.target.
Jul 15 05:32:12.849008 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 05:32:12.849015 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 05:32:12.849025 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 15 05:32:12.849033 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 05:32:12.849040 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 15 05:32:12.849049 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 15 05:32:12.849057 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 15 05:32:12.849065 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 15 05:32:12.849094 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 05:32:12.849101 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 05:32:12.849109 systemd[1]: Reached target paths.target - Path Units.
Jul 15 05:32:12.849116 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 05:32:12.849123 systemd[1]: Reached target swap.target - Swaps.
Jul 15 05:32:12.849131 systemd[1]: Reached target timers.target - Timer Units.
Jul 15 05:32:12.849138 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 05:32:12.849146 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 05:32:12.849153 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 15 05:32:12.849162 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 15 05:32:12.849170 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 05:32:12.849179 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 05:32:12.849186 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 05:32:12.849194 systemd[1]: Reached target sockets.target - Socket Units.
Jul 15 05:32:12.849201 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 15 05:32:12.849211 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 05:32:12.849219 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 15 05:32:12.849226 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 15 05:32:12.849234 systemd[1]: Starting systemd-fsck-usr.service...
Jul 15 05:32:12.849241 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 05:32:12.849249 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 05:32:12.849256 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 05:32:12.849284 systemd-journald[205]: Collecting audit messages is disabled.
Jul 15 05:32:12.849304 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 15 05:32:12.849314 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 05:32:12.849322 systemd-journald[205]: Journal started
Jul 15 05:32:12.849339 systemd-journald[205]: Runtime Journal (/run/log/journal/cb29cae4e1204320a1060184c1a4e117) is 8M, max 78.5M, 70.5M free.
Jul 15 05:32:12.832019 systemd-modules-load[206]: Inserted module 'overlay'
Jul 15 05:32:12.852482 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 05:32:12.856360 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 15 05:32:12.857213 systemd[1]: Finished systemd-fsck-usr.service.
Jul 15 05:32:12.860135 kernel: Bridge firewalling registered
Jul 15 05:32:12.858790 systemd-modules-load[206]: Inserted module 'br_netfilter'
Jul 15 05:32:12.860053 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 05:32:12.885435 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 05:32:12.933388 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 15 05:32:12.935102 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 05:32:12.936680 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 05:32:12.940198 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 15 05:32:12.946321 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 05:32:12.954790 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 05:32:12.954824 systemd-tmpfiles[222]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 15 05:32:12.957259 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 05:32:12.961144 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 05:32:12.966429 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 15 05:32:12.972992 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 05:32:12.978330 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 05:32:12.979572 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 15 05:32:12.999799 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=926b029026d98240a9e8b6527b65fc026ae523bea87c3b77ffd7237bcc7be4fb
Jul 15 05:32:13.008206 systemd-resolved[238]: Positive Trust Anchors:
Jul 15 05:32:13.008219 systemd-resolved[238]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 05:32:13.008241 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 15 05:32:13.013257 systemd-resolved[238]: Defaulting to hostname 'linux'.
Jul 15 05:32:13.014119 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 15 05:32:13.014830 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 15 05:32:13.093108 kernel: SCSI subsystem initialized
Jul 15 05:32:13.102147 kernel: Loading iSCSI transport class v2.0-870.
Jul 15 05:32:13.110100 kernel: iscsi: registered transport (tcp)
Jul 15 05:32:13.127357 kernel: iscsi: registered transport (qla4xxx)
Jul 15 05:32:13.127393 kernel: QLogic iSCSI HBA Driver
Jul 15 05:32:13.147065 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 15 05:32:13.162840 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 05:32:13.165626 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 15 05:32:13.220292 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 15 05:32:13.222037 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 15 05:32:13.284109 kernel: raid6: avx2x4 gen() 28435 MB/s
Jul 15 05:32:13.301103 kernel: raid6: avx2x2 gen() 28051 MB/s
Jul 15 05:32:13.318289 kernel: raid6: avx2x1 gen() 18927 MB/s
Jul 15 05:32:13.318311 kernel: raid6: using algorithm avx2x4 gen() 28435 MB/s
Jul 15 05:32:13.338109 kernel: raid6: .... xor() 2964 MB/s, rmw enabled
Jul 15 05:32:13.338144 kernel: raid6: using avx2x2 recovery algorithm
Jul 15 05:32:13.354106 kernel: xor: automatically using best checksumming function avx
Jul 15 05:32:13.483112 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 15 05:32:13.491124 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 15 05:32:13.494065 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 05:32:13.515431 systemd-udevd[454]: Using default interface naming scheme 'v255'.
Jul 15 05:32:13.519657 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 05:32:13.522512 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 15 05:32:13.547030 dracut-pre-trigger[461]: rd.md=0: removing MD RAID activation
Jul 15 05:32:13.577180 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 05:32:13.579036 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 15 05:32:13.646051 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 05:32:13.650734 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 15 05:32:13.705102 kernel: cryptd: max_cpu_qlen set to 1000
Jul 15 05:32:13.712098 kernel: AES CTR mode by8 optimization enabled
Jul 15 05:32:13.734122 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Jul 15 05:32:13.751756 kernel: scsi host0: Virtio SCSI HBA
Jul 15 05:32:13.755523 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jul 15 05:32:13.754357 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 05:32:13.754465 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 05:32:13.758384 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 05:32:13.762341 kernel: libata version 3.00 loaded.
Jul 15 05:32:13.762763 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 05:32:13.776112 kernel: ahci 0000:00:1f.2: version 3.0
Jul 15 05:32:13.776259 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 15 05:32:13.786841 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jul 15 05:32:13.786976 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jul 15 05:32:13.787310 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 15 05:32:13.879562 kernel: sd 0:0:0:0: Power-on or device reset occurred
Jul 15 05:32:13.881121 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Jul 15 05:32:13.881315 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 15 05:32:13.881463 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Jul 15 05:32:13.881680 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jul 15 05:32:13.881818 kernel: scsi host1: ahci
Jul 15 05:32:13.902573 kernel: scsi host2: ahci
Jul 15 05:32:13.904313 kernel: scsi host3: ahci
Jul 15 05:32:13.904479 kernel: scsi host4: ahci
Jul 15 05:32:13.905104 kernel: scsi host5: ahci
Jul 15 05:32:13.907108 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jul 15 05:32:13.907127 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 15 05:32:13.907136 kernel: GPT:9289727 != 167739391
Jul 15 05:32:13.907144 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 15 05:32:13.907152 kernel: GPT:9289727 != 167739391
Jul 15 05:32:13.907159 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 15 05:32:13.907167 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 15 05:32:13.908106 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 15 05:32:13.910167 kernel: scsi host6: ahci
Jul 15 05:32:13.910345 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 29 lpm-pol 0
Jul 15 05:32:13.910357 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 29 lpm-pol 0
Jul 15 05:32:13.910366 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 29 lpm-pol 0
Jul 15 05:32:13.910375 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 29 lpm-pol 0
Jul 15 05:32:13.910384 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 29 lpm-pol 0
Jul 15 05:32:13.910393 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 29 lpm-pol 0
Jul 15 05:32:13.970051 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jul 15 05:32:14.008108 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 05:32:14.046012 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jul 15 05:32:14.059195 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jul 15 05:32:14.059830 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jul 15 05:32:14.068718 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jul 15 05:32:14.071215 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 15 05:32:14.084403 disk-uuid[623]: Primary Header is updated.
Jul 15 05:32:14.084403 disk-uuid[623]: Secondary Entries is updated.
Jul 15 05:32:14.084403 disk-uuid[623]: Secondary Header is updated.
Jul 15 05:32:14.091111 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 15 05:32:14.115173 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 15 05:32:14.223289 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 15 05:32:14.223322 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 15 05:32:14.223334 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 15 05:32:14.223344 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 15 05:32:14.223353 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 15 05:32:14.225289 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jul 15 05:32:14.259263 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 15 05:32:14.281100 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 15 05:32:14.281727 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 05:32:14.282890 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 15 05:32:14.284959 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 15 05:32:14.302927 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 15 05:32:15.105548 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 15 05:32:15.107541 disk-uuid[624]: The operation has completed successfully.
Jul 15 05:32:15.162674 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 15 05:32:15.162831 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 15 05:32:15.184008 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 15 05:32:15.199173 sh[652]: Success
Jul 15 05:32:15.215269 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 15 05:32:15.215336 kernel: device-mapper: uevent: version 1.0.3
Jul 15 05:32:15.216173 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 15 05:32:15.225109 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jul 15 05:32:15.264495 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 15 05:32:15.269149 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 15 05:32:15.282872 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 15 05:32:15.292615 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 15 05:32:15.292674 kernel: BTRFS: device fsid eb96c768-dac4-4ca9-ae1d-82815d4ce00b devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (664)
Jul 15 05:32:15.295101 kernel: BTRFS info (device dm-0): first mount of filesystem eb96c768-dac4-4ca9-ae1d-82815d4ce00b
Jul 15 05:32:15.297570 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 15 05:32:15.299368 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 15 05:32:15.307859 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 15 05:32:15.308886 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 15 05:32:15.310040 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 15 05:32:15.310815 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 15 05:32:15.313292 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 15 05:32:15.338872 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (697)
Jul 15 05:32:15.338929 kernel: BTRFS info (device sda6): first mount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f
Jul 15 05:32:15.340429 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 15 05:32:15.342602 kernel: BTRFS info (device sda6): using free-space-tree
Jul 15 05:32:15.352139 kernel: BTRFS info (device sda6): last unmount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f
Jul 15 05:32:15.352937 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 15 05:32:15.354685 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 15 05:32:15.433781 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 15 05:32:15.437175 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 15 05:32:15.463911 ignition[750]: Ignition 2.21.0
Jul 15 05:32:15.463930 ignition[750]: Stage: fetch-offline
Jul 15 05:32:15.466492 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 15 05:32:15.463964 ignition[750]: no configs at "/usr/lib/ignition/base.d"
Jul 15 05:32:15.463975 ignition[750]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 15 05:32:15.464057 ignition[750]: parsed url from cmdline: ""
Jul 15 05:32:15.464061 ignition[750]: no config URL provided
Jul 15 05:32:15.464065 ignition[750]: reading system config file "/usr/lib/ignition/user.ign"
Jul 15 05:32:15.464087 ignition[750]: no config at "/usr/lib/ignition/user.ign"
Jul 15 05:32:15.464092 ignition[750]: failed to fetch config: resource requires networking
Jul 15 05:32:15.464232 ignition[750]: Ignition finished successfully
Jul 15 05:32:15.472270 systemd-networkd[837]: lo: Link UP
Jul 15 05:32:15.472282 systemd-networkd[837]: lo: Gained carrier
Jul 15 05:32:15.473759 systemd-networkd[837]: Enumeration completed
Jul 15 05:32:15.473868 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 15 05:32:15.474520 systemd-networkd[837]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 05:32:15.474525 systemd-networkd[837]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 15 05:32:15.475458 systemd[1]: Reached target network.target - Network.
Jul 15 05:32:15.477117 systemd-networkd[837]: eth0: Link UP
Jul 15 05:32:15.477122 systemd-networkd[837]: eth0: Gained carrier
Jul 15 05:32:15.477131 systemd-networkd[837]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 05:32:15.478320 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 15 05:32:15.512128 ignition[843]: Ignition 2.21.0
Jul 15 05:32:15.512144 ignition[843]: Stage: fetch
Jul 15 05:32:15.512274 ignition[843]: no configs at "/usr/lib/ignition/base.d"
Jul 15 05:32:15.512287 ignition[843]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 15 05:32:15.512379 ignition[843]: parsed url from cmdline: ""
Jul 15 05:32:15.512384 ignition[843]: no config URL provided
Jul 15 05:32:15.512389 ignition[843]: reading system config file "/usr/lib/ignition/user.ign"
Jul 15 05:32:15.512398 ignition[843]: no config at "/usr/lib/ignition/user.ign"
Jul 15 05:32:15.512436 ignition[843]: PUT http://169.254.169.254/v1/token: attempt #1
Jul 15 05:32:15.512810 ignition[843]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Jul 15 05:32:15.712959 ignition[843]: PUT http://169.254.169.254/v1/token: attempt #2
Jul 15 05:32:15.713116 ignition[843]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Jul 15 05:32:15.927178 systemd-networkd[837]: eth0: DHCPv4 address 172.237.155.110/24, gateway 172.237.155.1 acquired from 23.194.118.55
Jul 15 05:32:16.114161 ignition[843]: PUT http://169.254.169.254/v1/token: attempt #3
Jul 15 05:32:16.210001 ignition[843]: PUT result: OK
Jul 15 05:32:16.210174 ignition[843]: GET http://169.254.169.254/v1/user-data: attempt #1
Jul 15 05:32:16.316624 ignition[843]: GET result: OK
Jul 15 05:32:16.316825 ignition[843]: parsing config with SHA512: 2aaae39d941a92202db312d2f3dd665a01c2ea3879560f29b5590cbf04b236237c969fe962403a52169b5844a21070e47297cd1f6c6a234a05b6a4dc115f2e34
Jul 15 05:32:16.320210 unknown[843]: fetched base config from "system"
Jul 15 05:32:16.320437 ignition[843]: fetch: fetch complete
Jul 15 05:32:16.320223 unknown[843]: fetched base config from "system"
Jul 15 05:32:16.320442 ignition[843]: fetch: fetch passed
Jul 15 05:32:16.320229 unknown[843]: fetched user config from "akamai"
Jul 15 05:32:16.320485 ignition[843]: Ignition finished successfully
Jul 15 05:32:16.324264 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 15 05:32:16.326055 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 15 05:32:16.355128 ignition[851]: Ignition 2.21.0
Jul 15 05:32:16.355143 ignition[851]: Stage: kargs
Jul 15 05:32:16.360706 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 15 05:32:16.355248 ignition[851]: no configs at "/usr/lib/ignition/base.d"
Jul 15 05:32:16.355258 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 15 05:32:16.355817 ignition[851]: kargs: kargs passed
Jul 15 05:32:16.355860 ignition[851]: Ignition finished successfully
Jul 15 05:32:16.381011 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 15 05:32:16.400368 ignition[857]: Ignition 2.21.0
Jul 15 05:32:16.400378 ignition[857]: Stage: disks
Jul 15 05:32:16.400495 ignition[857]: no configs at "/usr/lib/ignition/base.d"
Jul 15 05:32:16.400504 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 15 05:32:16.403252 ignition[857]: disks: disks passed
Jul 15 05:32:16.403846 ignition[857]: Ignition finished successfully
Jul 15 05:32:16.405712 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 15 05:32:16.406979 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 15 05:32:16.407562 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 15 05:32:16.408745 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 15 05:32:16.409953 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 15 05:32:16.411009 systemd[1]: Reached target basic.target - Basic System.
Jul 15 05:32:16.413046 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 15 05:32:16.437551 systemd-fsck[866]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 15 05:32:16.439518 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 15 05:32:16.441367 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 15 05:32:16.546102 kernel: EXT4-fs (sda9): mounted filesystem 277c3938-5262-4ab1-8fa3-62fde82f8257 r/w with ordered data mode. Quota mode: none.
Jul 15 05:32:16.546623 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 15 05:32:16.547670 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 15 05:32:16.550048 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 15 05:32:16.553145 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 15 05:32:16.554433 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 15 05:32:16.555719 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 15 05:32:16.555748 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 15 05:32:16.560551 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 15 05:32:16.562913 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 15 05:32:16.571310 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (874)
Jul 15 05:32:16.571337 kernel: BTRFS info (device sda6): first mount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f
Jul 15 05:32:16.573895 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 15 05:32:16.576750 kernel: BTRFS info (device sda6): using free-space-tree
Jul 15 05:32:16.580344 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 15 05:32:16.611758 initrd-setup-root[899]: cut: /sysroot/etc/passwd: No such file or directory
Jul 15 05:32:16.616941 initrd-setup-root[906]: cut: /sysroot/etc/group: No such file or directory
Jul 15 05:32:16.620111 initrd-setup-root[913]: cut: /sysroot/etc/shadow: No such file or directory
Jul 15 05:32:16.623872 initrd-setup-root[920]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 15 05:32:16.705551 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 15 05:32:16.707497 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 15 05:32:16.709236 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 15 05:32:16.724478 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 15 05:32:16.727120 kernel: BTRFS info (device sda6): last unmount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f
Jul 15 05:32:16.744148 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 15 05:32:16.750561 ignition[988]: INFO : Ignition 2.21.0
Jul 15 05:32:16.750561 ignition[988]: INFO : Stage: mount
Jul 15 05:32:16.750561 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 05:32:16.750561 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 15 05:32:16.753273 ignition[988]: INFO : mount: mount passed
Jul 15 05:32:16.753273 ignition[988]: INFO : Ignition finished successfully
Jul 15 05:32:16.752517 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 15 05:32:16.755012 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 15 05:32:17.019253 systemd-networkd[837]: eth0: Gained IPv6LL
Jul 15 05:32:17.548371 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 15 05:32:17.581101 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1000)
Jul 15 05:32:17.581136 kernel: BTRFS info (device sda6): first mount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f
Jul 15 05:32:17.583405 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 15 05:32:17.585185 kernel: BTRFS info (device sda6): using free-space-tree
Jul 15 05:32:17.590405 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 15 05:32:17.617451 ignition[1016]: INFO : Ignition 2.21.0
Jul 15 05:32:17.617451 ignition[1016]: INFO : Stage: files
Jul 15 05:32:17.618700 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 05:32:17.618700 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 15 05:32:17.618700 ignition[1016]: DEBUG : files: compiled without relabeling support, skipping
Jul 15 05:32:17.620820 ignition[1016]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 15 05:32:17.620820 ignition[1016]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 15 05:32:17.624635 ignition[1016]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 15 05:32:17.625466 ignition[1016]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 15 05:32:17.625466 ignition[1016]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 15 05:32:17.625070 unknown[1016]: wrote ssh authorized keys file for user: core
Jul 15 05:32:17.627658 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jul 15 05:32:17.627658 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jul 15 05:32:17.822523 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 15 05:32:18.097105 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jul 15 05:32:18.098164 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 15 05:32:18.098164 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 15 05:32:18.098164 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 05:32:18.098164 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 05:32:18.098164 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 05:32:18.098164 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 05:32:18.098164 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 05:32:18.098164 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 05:32:18.104796 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 05:32:18.104796 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 05:32:18.104796 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 15 05:32:18.104796 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 15 05:32:18.104796 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 15 05:32:18.104796 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jul 15 05:32:18.521150 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 15 05:32:18.735451 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 15 05:32:18.735451 ignition[1016]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 15 05:32:18.737903 ignition[1016]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 05:32:18.738887 ignition[1016]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 05:32:18.738887 ignition[1016]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 15 05:32:18.738887 ignition[1016]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 15 05:32:18.738887 ignition[1016]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jul 15 05:32:18.738887 ignition[1016]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jul 15 05:32:18.738887 ignition[1016]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 15 05:32:18.738887 ignition[1016]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Jul 15 05:32:18.738887 ignition[1016]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Jul 15 05:32:18.738887 ignition[1016]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 05:32:18.738887 ignition[1016]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 05:32:18.738887 ignition[1016]: INFO : files: files passed
Jul 15 05:32:18.738887 ignition[1016]: INFO : Ignition finished successfully
Jul 15 05:32:18.741350 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 15 05:32:18.746219 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 15 05:32:18.750216 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 15 05:32:18.759547 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 15 05:32:18.759681 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 15 05:32:18.765925 initrd-setup-root-after-ignition[1047]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 05:32:18.765925 initrd-setup-root-after-ignition[1047]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 05:32:18.768283 initrd-setup-root-after-ignition[1051]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 05:32:18.770849 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 15 05:32:18.772011 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 15 05:32:18.773765 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 15 05:32:18.823388 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 15 05:32:18.823520 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 15 05:32:18.824868 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 15 05:32:18.825892 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 15 05:32:18.827129 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 15 05:32:18.827855 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 15 05:32:18.860584 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 05:32:18.863529 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 15 05:32:18.888762 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 15 05:32:18.889446 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 05:32:18.890753 systemd[1]: Stopped target timers.target - Timer Units.
Jul 15 05:32:18.892022 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 15 05:32:18.892150 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 05:32:18.893492 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 15 05:32:18.894301 systemd[1]: Stopped target basic.target - Basic System.
Jul 15 05:32:18.895484 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 15 05:32:18.896648 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 15 05:32:18.897696 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 15 05:32:18.899047 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 15 05:32:18.900319 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 15 05:32:18.901520 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 15 05:32:18.902774 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 15 05:32:18.904058 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 15 05:32:18.905334 systemd[1]: Stopped target swap.target - Swaps.
Jul 15 05:32:18.906471 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 15 05:32:18.906566 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 15 05:32:18.907957 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 15 05:32:18.908805 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 05:32:18.909874 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 15 05:32:18.910209 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 05:32:18.911176 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 15 05:32:18.911306 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 15 05:32:18.912981 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 15 05:32:18.913176 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 15 05:32:18.914439 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 15 05:32:18.914538 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 15 05:32:18.917162 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 15 05:32:18.920249 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 15 05:32:18.920813 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 15 05:32:18.920929 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 05:32:18.922253 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 15 05:32:18.922385 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 05:32:18.929684 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 15 05:32:18.929786 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 15 05:32:18.944882 ignition[1071]: INFO : Ignition 2.21.0
Jul 15 05:32:18.944882 ignition[1071]: INFO : Stage: umount
Jul 15 05:32:18.947666 ignition[1071]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 05:32:18.947666 ignition[1071]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 15 05:32:18.947666 ignition[1071]: INFO : umount: umount passed
Jul 15 05:32:18.947666 ignition[1071]: INFO : Ignition finished successfully
Jul 15 05:32:18.947408 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 15 05:32:18.947519 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 15 05:32:18.948647 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 15 05:32:18.948726 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 15 05:32:18.949768 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 15 05:32:18.949815 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 15 05:32:18.952389 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 15 05:32:18.952438 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 15 05:32:18.953816 systemd[1]: Stopped target network.target - Network.
Jul 15 05:32:18.955433 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 15 05:32:18.955484 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 15 05:32:18.981808 systemd[1]: Stopped target paths.target - Path Units.
Jul 15 05:32:18.982288 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 15 05:32:18.983211 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 05:32:18.984649 systemd[1]: Stopped target slices.target - Slice Units.
Jul 15 05:32:18.985154 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 15 05:32:18.985717 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 15 05:32:18.985763 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 05:32:18.987027 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 15 05:32:18.987067 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 05:32:18.988108 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 15 05:32:18.988160 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 15 05:32:18.989208 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 15 05:32:18.989254 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 15 05:32:18.990443 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 15 05:32:18.991513 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 15 05:32:18.996023 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 15 05:32:18.996895 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 15 05:32:18.997005 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 15 05:32:18.998414 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 15 05:32:18.998532 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 15 05:32:19.002225 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 15 05:32:19.002465 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 15 05:32:19.002586 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 15 05:32:19.004952 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 15 05:32:19.006673 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 15 05:32:19.007343 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 15 05:32:19.007383 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 05:32:19.008388 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 15 05:32:19.008441 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 15 05:32:19.010324 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 15 05:32:19.012395 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 15 05:32:19.012449 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 15 05:32:19.013647 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 15 05:32:19.013706 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 15 05:32:19.015352 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 15 05:32:19.015402 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 15 05:32:19.017061 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 15 05:32:19.017133 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 05:32:19.019024 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 05:32:19.023348 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 15 05:32:19.023413 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 15 05:32:19.038531 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 15 05:32:19.038667 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 15 05:32:19.043463 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 15 05:32:19.043650 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 05:32:19.045241 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 15 05:32:19.045311 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 15 05:32:19.046275 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 15 05:32:19.046316 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 05:32:19.047451 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 15 05:32:19.047504 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 15 05:32:19.049245 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 15 05:32:19.049293 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 15 05:32:19.050450 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 15 05:32:19.050496 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 05:32:19.053181 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 15 05:32:19.054200 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 15 05:32:19.054257 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 05:32:19.057512 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 15 05:32:19.057564 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 05:32:19.059033 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 15 05:32:19.059103 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 05:32:19.060169 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 15 05:32:19.060216 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 05:32:19.061281 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 05:32:19.061327 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 05:32:19.064276 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 15 05:32:19.064336 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jul 15 05:32:19.064391 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 15 05:32:19.064441 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 15 05:32:19.071403 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 15 05:32:19.071527 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 15 05:32:19.073053 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 15 05:32:19.074982 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 15 05:32:19.087215 systemd[1]: Switching root.
Jul 15 05:32:19.120183 systemd-journald[205]: Journal stopped
Jul 15 05:32:20.092242 systemd-journald[205]: Received SIGTERM from PID 1 (systemd).
Jul 15 05:32:20.092272 kernel: SELinux: policy capability network_peer_controls=1
Jul 15 05:32:20.092284 kernel: SELinux: policy capability open_perms=1
Jul 15 05:32:20.092297 kernel: SELinux: policy capability extended_socket_class=1
Jul 15 05:32:20.092306 kernel: SELinux: policy capability always_check_network=0
Jul 15 05:32:20.092315 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 15 05:32:20.092325 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 15 05:32:20.092334 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 15 05:32:20.092343 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 15 05:32:20.092352 kernel: SELinux: policy capability userspace_initial_context=0
Jul 15 05:32:20.092365 kernel: audit: type=1403 audit(1752557539.278:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 15 05:32:20.092375 systemd[1]: Successfully loaded SELinux policy in 85.635ms.
Jul 15 05:32:20.092386 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.968ms.
Jul 15 05:32:20.092397 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 05:32:20.092408 systemd[1]: Detected virtualization kvm.
Jul 15 05:32:20.092420 systemd[1]: Detected architecture x86-64.
Jul 15 05:32:20.092430 systemd[1]: Detected first boot.
Jul 15 05:32:20.092441 systemd[1]: Initializing machine ID from random generator.
Jul 15 05:32:20.092452 zram_generator::config[1119]: No configuration found.
Jul 15 05:32:20.092464 kernel: Guest personality initialized and is inactive
Jul 15 05:32:20.092474 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 15 05:32:20.092483 kernel: Initialized host personality
Jul 15 05:32:20.092495 kernel: NET: Registered PF_VSOCK protocol family
Jul 15 05:32:20.092506 systemd[1]: Populated /etc with preset unit settings.
Jul 15 05:32:20.092516 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 15 05:32:20.092526 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 15 05:32:20.092536 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 15 05:32:20.092546 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 15 05:32:20.092556 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 15 05:32:20.092569 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 15 05:32:20.092580 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 15 05:32:20.092590 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 15 05:32:20.092600 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 15 05:32:20.092610 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 15 05:32:20.092620 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 15 05:32:20.092631 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 15 05:32:20.092644 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 05:32:20.092654 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 05:32:20.092664 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 15 05:32:20.092675 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 15 05:32:20.092688 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 15 05:32:20.092699 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 05:32:20.092710 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 15 05:32:20.092720 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 05:32:20.092733 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 05:32:20.092743 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 15 05:32:20.092753 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 15 05:32:20.092765 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 15 05:32:20.092775 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 15 05:32:20.092785 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 05:32:20.092795 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 15 05:32:20.092806 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 05:32:20.092818 systemd[1]: Reached target swap.target - Swaps.
Jul 15 05:32:20.092829 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 15 05:32:20.092840 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 15 05:32:20.092850 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 15 05:32:20.092861 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 05:32:20.092874 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 05:32:20.092884 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 05:32:20.092895 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 15 05:32:20.092905 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 15 05:32:20.092915 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 15 05:32:20.092925 systemd[1]: Mounting media.mount - External Media Directory...
Jul 15 05:32:20.092936 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 05:32:20.092946 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 15 05:32:20.092961 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 15 05:32:20.092971 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 15 05:32:20.092982 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 15 05:32:20.092992 systemd[1]: Reached target machines.target - Containers.
Jul 15 05:32:20.093002 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 15 05:32:20.093012 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 05:32:20.093022 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 05:32:20.093033 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 15 05:32:20.093046 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 15 05:32:20.093056 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 15 05:32:20.093067 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 15 05:32:20.097371 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 15 05:32:20.097386 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 15 05:32:20.097397 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 15 05:32:20.097407 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 15 05:32:20.097417 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 15 05:32:20.097428 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 15 05:32:20.097443 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 15 05:32:20.097454 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 05:32:20.097465 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 05:32:20.097475 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 05:32:20.097485 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 15 05:32:20.097496 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 15 05:32:20.097506 kernel: loop: module loaded
Jul 15 05:32:20.097517 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 15 05:32:20.097530 kernel: fuse: init (API version 7.41)
Jul 15 05:32:20.097540 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 15 05:32:20.097551 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 15 05:32:20.097561 systemd[1]: Stopped verity-setup.service.
Jul 15 05:32:20.097572 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 05:32:20.097582 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 15 05:32:20.097593 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 15 05:32:20.097603 systemd[1]: Mounted media.mount - External Media Directory.
Jul 15 05:32:20.097617 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 15 05:32:20.097627 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 15 05:32:20.097637 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 15 05:32:20.097647 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 15 05:32:20.097658 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 05:32:20.097668 kernel: ACPI: bus type drm_connector registered
Jul 15 05:32:20.097678 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 15 05:32:20.097688 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 15 05:32:20.097699 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 05:32:20.097711 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 15 05:32:20.097721 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 15 05:32:20.097754 systemd-journald[1206]: Collecting audit messages is disabled.
Jul 15 05:32:20.097776 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 15 05:32:20.097790 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 05:32:20.097801 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 15 05:32:20.097811 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 15 05:32:20.097822 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 15 05:32:20.097832 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 05:32:20.097843 systemd-journald[1206]: Journal started
Jul 15 05:32:20.097866 systemd-journald[1206]: Runtime Journal (/run/log/journal/be580124e37c4f7fbb114c17292624ca) is 8M, max 78.5M, 70.5M free.
Jul 15 05:32:19.770344 systemd[1]: Queued start job for default target multi-user.target.
Jul 15 05:32:19.779893 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jul 15 05:32:19.780453 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 15 05:32:20.102168 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 15 05:32:20.102195 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 05:32:20.104365 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 05:32:20.105876 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 05:32:20.106862 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 15 05:32:20.107906 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 15 05:32:20.125684 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 15 05:32:20.129157 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 15 05:32:20.130747 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 15 05:32:20.132181 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 15 05:32:20.132261 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 15 05:32:20.134959 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 15 05:32:20.139619 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 15 05:32:20.140347 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 05:32:20.144196 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 15 05:32:20.149475 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 15 05:32:20.150288 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 05:32:20.155637 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 15 05:32:20.156289 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 15 05:32:20.157258 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 05:32:20.160729 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 15 05:32:20.167501 systemd-journald[1206]: Time spent on flushing to /var/log/journal/be580124e37c4f7fbb114c17292624ca is 29.571ms for 995 entries.
Jul 15 05:32:20.167501 systemd-journald[1206]: System Journal (/var/log/journal/be580124e37c4f7fbb114c17292624ca) is 8M, max 195.6M, 187.6M free.
Jul 15 05:32:20.217804 systemd-journald[1206]: Received client request to flush runtime journal.
Jul 15 05:32:20.217855 kernel: loop0: detected capacity change from 0 to 114000
Jul 15 05:32:20.164233 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 15 05:32:20.169335 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 05:32:20.171117 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 15 05:32:20.172441 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 15 05:32:20.192290 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 15 05:32:20.193048 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 15 05:32:20.199020 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 15 05:32:20.229897 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 15 05:32:20.231582 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 15 05:32:20.243391 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 05:32:20.248038 systemd-tmpfiles[1241]: ACLs are not supported, ignoring.
Jul 15 05:32:20.248058 systemd-tmpfiles[1241]: ACLs are not supported, ignoring.
Jul 15 05:32:20.254319 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 15 05:32:20.255575 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 05:32:20.256460 kernel: loop1: detected capacity change from 0 to 8
Jul 15 05:32:20.261885 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 15 05:32:20.278239 kernel: loop2: detected capacity change from 0 to 229808
Jul 15 05:32:20.318170 kernel: loop3: detected capacity change from 0 to 146488
Jul 15 05:32:20.317713 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 15 05:32:20.323185 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 05:32:20.349536 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
Jul 15 05:32:20.349763 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
Jul 15 05:32:20.353395 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 05:32:20.367100 kernel: loop4: detected capacity change from 0 to 114000
Jul 15 05:32:20.383108 kernel: loop5: detected capacity change from 0 to 8
Jul 15 05:32:20.386103 kernel: loop6: detected capacity change from 0 to 229808
Jul 15 05:32:20.406099 kernel: loop7: detected capacity change from 0 to 146488
Jul 15 05:32:20.422018 (sd-merge)[1267]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
Jul 15 05:32:20.422779 (sd-merge)[1267]: Merged extensions into '/usr'.
Jul 15 05:32:20.426890 systemd[1]: Reload requested from client PID 1240 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 15 05:32:20.426973 systemd[1]: Reloading...
Jul 15 05:32:20.512114 zram_generator::config[1290]: No configuration found.
Jul 15 05:32:20.608587 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 05:32:20.683604 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 15 05:32:20.684474 systemd[1]: Reloading finished in 257 ms.
Jul 15 05:32:20.697100 ldconfig[1235]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 15 05:32:20.704285 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 15 05:32:20.707497 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 15 05:32:20.717188 systemd[1]: Starting ensure-sysext.service...
Jul 15 05:32:20.719339 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 05:32:20.746021 systemd[1]: Reload requested from client PID 1336 ('systemctl') (unit ensure-sysext.service)...
Jul 15 05:32:20.746108 systemd[1]: Reloading...
Jul 15 05:32:20.763291 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 15 05:32:20.763332 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 15 05:32:20.763572 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 15 05:32:20.763779 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 15 05:32:20.764951 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 15 05:32:20.765230 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. Jul 15 05:32:20.765339 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. Jul 15 05:32:20.770019 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 05:32:20.770141 systemd-tmpfiles[1337]: Skipping /boot Jul 15 05:32:20.786018 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 05:32:20.787162 systemd-tmpfiles[1337]: Skipping /boot Jul 15 05:32:20.824110 zram_generator::config[1364]: No configuration found. Jul 15 05:32:20.912872 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:32:20.970455 systemd[1]: Reloading finished in 223 ms. Jul 15 05:32:20.981984 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 15 05:32:20.995185 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 05:32:21.003050 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 05:32:21.005247 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 15 05:32:21.017758 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Jul 15 05:32:21.021273 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 15 05:32:21.024797 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 05:32:21.028176 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 15 05:32:21.034334 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:32:21.034512 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 05:32:21.035656 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 05:32:21.038198 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 05:32:21.042230 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 05:32:21.043010 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 05:32:21.043197 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 05:32:21.046341 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 15 05:32:21.047478 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:32:21.051792 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:32:21.052500 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 05:32:21.056317 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jul 15 05:32:21.058238 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 05:32:21.058326 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 05:32:21.058450 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:32:21.059220 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 15 05:32:21.064963 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 15 05:32:21.077260 systemd[1]: Finished ensure-sysext.service. Jul 15 05:32:21.078187 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 15 05:32:21.089249 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 15 05:32:21.093182 systemd-udevd[1413]: Using default interface naming scheme 'v255'. Jul 15 05:32:21.097027 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 05:32:21.097257 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 05:32:21.103744 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 05:32:21.103996 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 05:32:21.104699 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 05:32:21.115798 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 05:32:21.118108 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 05:32:21.121683 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jul 15 05:32:21.122256 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 05:32:21.122969 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 05:32:21.128916 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 15 05:32:21.147944 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 15 05:32:21.151537 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 05:32:21.155292 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 05:32:21.161580 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 15 05:32:21.162765 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 05:32:21.173582 augenrules[1466]: No rules Jul 15 05:32:21.175492 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 05:32:21.175780 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 05:32:21.249451 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 15 05:32:21.340098 kernel: mousedev: PS/2 mouse device common for all mice Jul 15 05:32:21.349102 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 15 05:32:21.385249 kernel: ACPI: button: Power Button [PWRF] Jul 15 05:32:21.387916 systemd-networkd[1457]: lo: Link UP Jul 15 05:32:21.387930 systemd-networkd[1457]: lo: Gained carrier Jul 15 05:32:21.389587 systemd-networkd[1457]: Enumeration completed Jul 15 05:32:21.389697 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jul 15 05:32:21.389990 systemd-networkd[1457]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 05:32:21.389995 systemd-networkd[1457]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 05:32:21.390690 systemd-networkd[1457]: eth0: Link UP Jul 15 05:32:21.390845 systemd-networkd[1457]: eth0: Gained carrier Jul 15 05:32:21.390858 systemd-networkd[1457]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 05:32:21.392596 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 15 05:32:21.395828 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 15 05:32:21.432016 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 15 05:32:21.438663 systemd-resolved[1412]: Positive Trust Anchors: Jul 15 05:32:21.438952 systemd-resolved[1412]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 05:32:21.439018 systemd-resolved[1412]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 05:32:21.443775 systemd-resolved[1412]: Defaulting to hostname 'linux'. Jul 15 05:32:21.447997 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jul 15 05:32:21.448730 systemd[1]: Reached target network.target - Network. Jul 15 05:32:21.449429 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 05:32:21.457112 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 15 05:32:21.457361 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 15 05:32:21.464280 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 15 05:32:21.464921 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 05:32:21.465718 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 15 05:32:21.467423 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 15 05:32:21.467991 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 15 05:32:21.468553 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 15 05:32:21.469149 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 15 05:32:21.469187 systemd[1]: Reached target paths.target - Path Units. Jul 15 05:32:21.469936 systemd[1]: Reached target time-set.target - System Time Set. Jul 15 05:32:21.470684 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 15 05:32:21.498806 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 15 05:32:21.499541 systemd[1]: Reached target timers.target - Timer Units. Jul 15 05:32:21.502169 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 15 05:32:21.505890 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 15 05:32:21.511605 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). 
Jul 15 05:32:21.513414 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 15 05:32:21.514499 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 15 05:32:21.522989 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 15 05:32:21.524498 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 15 05:32:21.528886 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 15 05:32:21.542686 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jul 15 05:32:21.544168 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 05:32:21.545310 kernel: EDAC MC: Ver: 3.0.0 Jul 15 05:32:21.544814 systemd[1]: Reached target basic.target - Basic System. Jul 15 05:32:21.545481 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 15 05:32:21.545567 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 15 05:32:21.549447 systemd[1]: Starting containerd.service - containerd container runtime... Jul 15 05:32:21.553383 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 15 05:32:21.557142 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 15 05:32:21.568271 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 15 05:32:21.575654 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 15 05:32:21.587485 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 15 05:32:21.588123 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 15 05:32:21.591585 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... 
Jul 15 05:32:21.606233 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 15 05:32:21.614401 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 15 05:32:21.619324 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 15 05:32:21.627473 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 15 05:32:21.637449 jq[1526]: false Jul 15 05:32:21.639284 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 15 05:32:21.647347 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 15 05:32:21.648772 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 15 05:32:21.649217 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 15 05:32:21.652708 systemd[1]: Starting update-engine.service - Update Engine... Jul 15 05:32:21.663180 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 15 05:32:21.670730 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 15 05:32:21.671593 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 15 05:32:21.673255 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 15 05:32:21.694948 (ntainerd)[1551]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 15 05:32:21.732111 google_oslogin_nss_cache[1528]: oslogin_cache_refresh[1528]: Refreshing passwd entry cache Jul 15 05:32:21.730360 oslogin_cache_refresh[1528]: Refreshing passwd entry cache Jul 15 05:32:21.732986 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Jul 15 05:32:21.736563 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 15 05:32:21.744197 update_engine[1539]: I20250715 05:32:21.742511 1539 main.cc:92] Flatcar Update Engine starting Jul 15 05:32:21.755308 google_oslogin_nss_cache[1528]: oslogin_cache_refresh[1528]: Failure getting users, quitting Jul 15 05:32:21.755308 google_oslogin_nss_cache[1528]: oslogin_cache_refresh[1528]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 15 05:32:21.755308 google_oslogin_nss_cache[1528]: oslogin_cache_refresh[1528]: Refreshing group entry cache Jul 15 05:32:21.754496 oslogin_cache_refresh[1528]: Failure getting users, quitting Jul 15 05:32:21.754513 oslogin_cache_refresh[1528]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 15 05:32:21.754562 oslogin_cache_refresh[1528]: Refreshing group entry cache Jul 15 05:32:21.756504 google_oslogin_nss_cache[1528]: oslogin_cache_refresh[1528]: Failure getting groups, quitting Jul 15 05:32:21.756551 oslogin_cache_refresh[1528]: Failure getting groups, quitting Jul 15 05:32:21.756619 google_oslogin_nss_cache[1528]: oslogin_cache_refresh[1528]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 15 05:32:21.756653 oslogin_cache_refresh[1528]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 15 05:32:21.768042 jq[1543]: true Jul 15 05:32:21.772894 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 15 05:32:21.773366 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 15 05:32:21.791711 jq[1565]: true Jul 15 05:32:21.802539 systemd[1]: motdgen.service: Deactivated successfully. Jul 15 05:32:21.802854 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jul 15 05:32:21.806929 tar[1546]: linux-amd64/LICENSE Jul 15 05:32:21.806929 tar[1546]: linux-amd64/helm Jul 15 05:32:21.822444 dbus-daemon[1522]: [system] SELinux support is enabled Jul 15 05:32:21.824055 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 15 05:32:21.827306 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 15 05:32:21.829006 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 15 05:32:21.829591 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 15 05:32:21.829606 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 15 05:32:21.835656 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 15 05:32:21.847762 extend-filesystems[1527]: Found /dev/sda6 Jul 15 05:32:21.859262 systemd[1]: Started update-engine.service - Update Engine. Jul 15 05:32:21.861282 update_engine[1539]: I20250715 05:32:21.861035 1539 update_check_scheduler.cc:74] Next update check in 5m41s Jul 15 05:32:21.868405 extend-filesystems[1527]: Found /dev/sda9 Jul 15 05:32:21.876130 extend-filesystems[1527]: Checking size of /dev/sda9 Jul 15 05:32:21.879371 coreos-metadata[1520]: Jul 15 05:32:21.877 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Jul 15 05:32:21.878062 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 15 05:32:21.895323 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 15 05:32:21.899289 systemd-networkd[1457]: eth0: DHCPv4 address 172.237.155.110/24, gateway 172.237.155.1 acquired from 23.194.118.55 Jul 15 05:32:21.900807 systemd-timesyncd[1430]: Network configuration changed, trying to establish connection. Jul 15 05:32:21.902060 dbus-daemon[1522]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1457 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 15 05:32:21.907369 extend-filesystems[1527]: Resized partition /dev/sda9 Jul 15 05:32:21.912521 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 15 05:32:21.918846 extend-filesystems[1598]: resize2fs 1.47.2 (1-Jan-2025) Jul 15 05:32:21.923221 bash[1597]: Updated "/home/core/.ssh/authorized_keys" Jul 15 05:32:21.924738 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 15 05:32:21.936217 systemd[1]: Starting sshkeys.service... Jul 15 05:32:21.950091 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks Jul 15 05:32:21.964595 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 15 05:32:21.974931 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 15 05:32:22.029395 systemd-timesyncd[1430]: Contacted time server 96.245.170.99:123 (0.flatcar.pool.ntp.org). Jul 15 05:32:22.029456 systemd-timesyncd[1430]: Initial clock synchronization to Tue 2025-07-15 05:32:21.990024 UTC. 
Jul 15 05:32:22.058020 locksmithd[1580]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 15 05:32:22.144427 coreos-metadata[1608]: Jul 15 05:32:22.143 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Jul 15 05:32:22.173029 containerd[1551]: time="2025-07-15T05:32:22Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 15 05:32:22.189840 kernel: EXT4-fs (sda9): resized filesystem to 20360187 Jul 15 05:32:22.195395 containerd[1551]: time="2025-07-15T05:32:22.195347840Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 15 05:32:22.204431 extend-filesystems[1598]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jul 15 05:32:22.204431 extend-filesystems[1598]: old_desc_blocks = 1, new_desc_blocks = 10 Jul 15 05:32:22.204431 extend-filesystems[1598]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. Jul 15 05:32:22.228156 extend-filesystems[1527]: Resized filesystem in /dev/sda9 Jul 15 05:32:22.228617 containerd[1551]: time="2025-07-15T05:32:22.227986070Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.16µs" Jul 15 05:32:22.228617 containerd[1551]: time="2025-07-15T05:32:22.228019620Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 15 05:32:22.228617 containerd[1551]: time="2025-07-15T05:32:22.228050530Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 15 05:32:22.207715 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 15 05:32:22.209311 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jul 15 05:32:22.229119 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:32:22.230925 containerd[1551]: time="2025-07-15T05:32:22.230184610Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 15 05:32:22.230925 containerd[1551]: time="2025-07-15T05:32:22.230216270Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 15 05:32:22.230925 containerd[1551]: time="2025-07-15T05:32:22.230247320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 05:32:22.230925 containerd[1551]: time="2025-07-15T05:32:22.230312750Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 05:32:22.230925 containerd[1551]: time="2025-07-15T05:32:22.230323550Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 05:32:22.230925 containerd[1551]: time="2025-07-15T05:32:22.230533810Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 05:32:22.230925 containerd[1551]: time="2025-07-15T05:32:22.230550660Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 05:32:22.230925 containerd[1551]: time="2025-07-15T05:32:22.230562220Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 05:32:22.230925 containerd[1551]: time="2025-07-15T05:32:22.230570980Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native 
type=io.containerd.snapshotter.v1 Jul 15 05:32:22.230925 containerd[1551]: time="2025-07-15T05:32:22.230650620Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 15 05:32:22.230925 containerd[1551]: time="2025-07-15T05:32:22.230856720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 05:32:22.231114 containerd[1551]: time="2025-07-15T05:32:22.230885170Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 05:32:22.231114 containerd[1551]: time="2025-07-15T05:32:22.230894690Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 15 05:32:22.231178 containerd[1551]: time="2025-07-15T05:32:22.231164580Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 15 05:32:22.231529 containerd[1551]: time="2025-07-15T05:32:22.231516110Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 15 05:32:22.231627 containerd[1551]: time="2025-07-15T05:32:22.231614640Z" level=info msg="metadata content store policy set" policy=shared Jul 15 05:32:22.235146 containerd[1551]: time="2025-07-15T05:32:22.235126500Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 15 05:32:22.235396 containerd[1551]: time="2025-07-15T05:32:22.235381050Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 15 05:32:22.235505 containerd[1551]: time="2025-07-15T05:32:22.235490140Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 15 05:32:22.238122 containerd[1551]: time="2025-07-15T05:32:22.237114030Z" level=info msg="loading 
plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 15 05:32:22.238122 containerd[1551]: time="2025-07-15T05:32:22.237143900Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 15 05:32:22.238122 containerd[1551]: time="2025-07-15T05:32:22.237160600Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 15 05:32:22.238122 containerd[1551]: time="2025-07-15T05:32:22.237179210Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 15 05:32:22.238122 containerd[1551]: time="2025-07-15T05:32:22.237196310Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 15 05:32:22.238122 containerd[1551]: time="2025-07-15T05:32:22.237219310Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 15 05:32:22.238122 containerd[1551]: time="2025-07-15T05:32:22.237239310Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 15 05:32:22.238122 containerd[1551]: time="2025-07-15T05:32:22.237251330Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 15 05:32:22.238122 containerd[1551]: time="2025-07-15T05:32:22.237266360Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 15 05:32:22.238122 containerd[1551]: time="2025-07-15T05:32:22.237372140Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 15 05:32:22.238122 containerd[1551]: time="2025-07-15T05:32:22.237392160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 15 05:32:22.238122 containerd[1551]: time="2025-07-15T05:32:22.237409870Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 15 05:32:22.238122 containerd[1551]: time="2025-07-15T05:32:22.237422520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 15 05:32:22.238122 containerd[1551]: time="2025-07-15T05:32:22.237434280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 15 05:32:22.238319 containerd[1551]: time="2025-07-15T05:32:22.237447660Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 15 05:32:22.238319 containerd[1551]: time="2025-07-15T05:32:22.237460100Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 15 05:32:22.238319 containerd[1551]: time="2025-07-15T05:32:22.237471540Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 15 05:32:22.238319 containerd[1551]: time="2025-07-15T05:32:22.237483910Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 15 05:32:22.238319 containerd[1551]: time="2025-07-15T05:32:22.237495660Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 15 05:32:22.238319 containerd[1551]: time="2025-07-15T05:32:22.237506150Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 15 05:32:22.238319 containerd[1551]: time="2025-07-15T05:32:22.237575140Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 15 05:32:22.238319 containerd[1551]: time="2025-07-15T05:32:22.237594030Z" level=info msg="Start snapshots syncer" Jul 15 05:32:22.238319 containerd[1551]: time="2025-07-15T05:32:22.237634870Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 15 05:32:22.238444 
containerd[1551]: time="2025-07-15T05:32:22.237952870Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 15 05:32:22.238444 containerd[1551]: time="2025-07-15T05:32:22.237997810Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 15 05:32:22.238550 containerd[1551]: time="2025-07-15T05:32:22.238085710Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 15 05:32:22.240089 containerd[1551]: time="2025-07-15T05:32:22.239610490Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 15 05:32:22.240089 containerd[1551]: time="2025-07-15T05:32:22.239643120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 15 05:32:22.240089 containerd[1551]: time="2025-07-15T05:32:22.239654140Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 15 05:32:22.240089 containerd[1551]: time="2025-07-15T05:32:22.239666080Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 15 05:32:22.240089 containerd[1551]: time="2025-07-15T05:32:22.239677520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 15 05:32:22.240089 containerd[1551]: time="2025-07-15T05:32:22.239687380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 15 05:32:22.240089 containerd[1551]: time="2025-07-15T05:32:22.239696810Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 15 05:32:22.240089 containerd[1551]: time="2025-07-15T05:32:22.239717200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 15 05:32:22.240089 containerd[1551]: time="2025-07-15T05:32:22.239726490Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 15 05:32:22.240089 containerd[1551]: time="2025-07-15T05:32:22.239735580Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 15 05:32:22.240089 containerd[1551]: time="2025-07-15T05:32:22.239770270Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 15 05:32:22.240089 containerd[1551]: time="2025-07-15T05:32:22.239783680Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 15 05:32:22.240089 containerd[1551]: time="2025-07-15T05:32:22.239791510Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 15 05:32:22.240287 containerd[1551]: time="2025-07-15T05:32:22.239800030Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 15 05:32:22.240287 containerd[1551]: time="2025-07-15T05:32:22.239806540Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 15 05:32:22.240287 containerd[1551]: time="2025-07-15T05:32:22.239815600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 15 05:32:22.240287 containerd[1551]: time="2025-07-15T05:32:22.239825520Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 15 05:32:22.240287 containerd[1551]: time="2025-07-15T05:32:22.239845650Z" level=info msg="runtime interface created"
Jul 15 05:32:22.240287 containerd[1551]: time="2025-07-15T05:32:22.239850750Z" level=info msg="created NRI interface"
Jul 15 05:32:22.240287 containerd[1551]: time="2025-07-15T05:32:22.239859810Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 15 05:32:22.240287 containerd[1551]: time="2025-07-15T05:32:22.239869090Z" level=info msg="Connect containerd service"
Jul 15 05:32:22.240287 containerd[1551]: time="2025-07-15T05:32:22.239890810Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 15 05:32:22.242801 containerd[1551]: time="2025-07-15T05:32:22.242554570Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 15 05:32:22.263142 coreos-metadata[1608]: Jul 15 05:32:22.262 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1
Jul 15 05:32:22.270159 sshd_keygen[1561]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 15 05:32:22.318711 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 15 05:32:22.324229 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 15 05:32:22.350738 systemd[1]: issuegen.service: Deactivated successfully.
Jul 15 05:32:22.351172 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 15 05:32:22.357207 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 15 05:32:22.391907 systemd-logind[1538]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 15 05:32:22.393490 systemd-logind[1538]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 15 05:32:22.393821 systemd-logind[1538]: New seat seat0.
Jul 15 05:32:22.399272 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 15 05:32:22.402617 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 15 05:32:22.407584 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 15 05:32:22.410932 coreos-metadata[1608]: Jul 15 05:32:22.408 INFO Fetch successful
Jul 15 05:32:22.413195 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 15 05:32:22.413859 systemd[1]: Reached target getty.target - Login Prompts.
Jul 15 05:32:22.431183 containerd[1551]: time="2025-07-15T05:32:22.431035200Z" level=info msg="Start subscribing containerd event"
Jul 15 05:32:22.431183 containerd[1551]: time="2025-07-15T05:32:22.431136290Z" level=info msg="Start recovering state"
Jul 15 05:32:22.431281 containerd[1551]: time="2025-07-15T05:32:22.431252660Z" level=info msg="Start event monitor"
Jul 15 05:32:22.431281 containerd[1551]: time="2025-07-15T05:32:22.431279630Z" level=info msg="Start cni network conf syncer for default"
Jul 15 05:32:22.431334 containerd[1551]: time="2025-07-15T05:32:22.431294830Z" level=info msg="Start streaming server"
Jul 15 05:32:22.431334 containerd[1551]: time="2025-07-15T05:32:22.431307140Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jul 15 05:32:22.431334 containerd[1551]: time="2025-07-15T05:32:22.431315360Z" level=info msg="runtime interface starting up..."
Jul 15 05:32:22.431334 containerd[1551]: time="2025-07-15T05:32:22.431321800Z" level=info msg="starting plugins..."
Jul 15 05:32:22.431393 containerd[1551]: time="2025-07-15T05:32:22.431337840Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jul 15 05:32:22.432910 containerd[1551]: time="2025-07-15T05:32:22.432871550Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 15 05:32:22.433216 containerd[1551]: time="2025-07-15T05:32:22.433182040Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 15 05:32:22.436827 systemd[1]: Started containerd.service - containerd container runtime.
Jul 15 05:32:22.438255 containerd[1551]: time="2025-07-15T05:32:22.438224110Z" level=info msg="containerd successfully booted in 0.268077s"
Jul 15 05:32:22.439044 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jul 15 05:32:22.440498 update-ssh-keys[1656]: Updated "/home/core/.ssh/authorized_keys"
Jul 15 05:32:22.440865 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jul 15 05:32:22.441749 dbus-daemon[1522]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jul 15 05:32:22.443492 systemd[1]: Finished sshkeys.service.
Jul 15 05:32:22.449536 dbus-daemon[1522]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1596 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jul 15 05:32:22.457123 systemd[1]: Starting polkit.service - Authorization Manager...
Jul 15 05:32:22.534899 polkitd[1660]: Started polkitd version 126
Jul 15 05:32:22.538997 polkitd[1660]: Loading rules from directory /etc/polkit-1/rules.d
Jul 15 05:32:22.539272 polkitd[1660]: Loading rules from directory /run/polkit-1/rules.d
Jul 15 05:32:22.539317 polkitd[1660]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Jul 15 05:32:22.539503 polkitd[1660]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Jul 15 05:32:22.539533 polkitd[1660]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Jul 15 05:32:22.539567 polkitd[1660]: Loading rules from directory /usr/share/polkit-1/rules.d
Jul 15 05:32:22.540760 polkitd[1660]: Finished loading, compiling and executing 2 rules
Jul 15 05:32:22.541139 systemd[1]: Started polkit.service - Authorization Manager.
Jul 15 05:32:22.542033 dbus-daemon[1522]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jul 15 05:32:22.543068 polkitd[1660]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jul 15 05:32:22.550798 systemd-hostnamed[1596]: Hostname set to <172-237-155-110> (transient)
Jul 15 05:32:22.550810 systemd-resolved[1412]: System hostname changed to '172-237-155-110'.
Jul 15 05:32:22.581180 tar[1546]: linux-amd64/README.md
Jul 15 05:32:22.605433 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 15 05:32:22.715236 systemd-networkd[1457]: eth0: Gained IPv6LL
Jul 15 05:32:22.718136 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 15 05:32:22.719218 systemd[1]: Reached target network-online.target - Network is Online.
Jul 15 05:32:22.721953 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 05:32:22.725241 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 15 05:32:22.748745 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 15 05:32:22.888900 coreos-metadata[1520]: Jul 15 05:32:22.888 INFO Putting http://169.254.169.254/v1/token: Attempt #2
Jul 15 05:32:22.978957 coreos-metadata[1520]: Jul 15 05:32:22.978 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1
Jul 15 05:32:23.160905 coreos-metadata[1520]: Jul 15 05:32:23.160 INFO Fetch successful
Jul 15 05:32:23.161050 coreos-metadata[1520]: Jul 15 05:32:23.160 INFO Fetching http://169.254.169.254/v1/network: Attempt #1
Jul 15 05:32:23.414641 coreos-metadata[1520]: Jul 15 05:32:23.412 INFO Fetch successful
Jul 15 05:32:23.527791 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 15 05:32:23.531029 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 15 05:32:23.595269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 05:32:23.596665 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 15 05:32:23.598204 systemd[1]: Startup finished in 2.561s (kernel) + 6.602s (initrd) + 4.404s (userspace) = 13.568s.
Jul 15 05:32:23.640418 (kubelet)[1710]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 05:32:24.170317 kubelet[1710]: E0715 05:32:24.170229 1710 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 05:32:24.175879 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 05:32:24.176114 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 05:32:24.178558 systemd[1]: kubelet.service: Consumed 877ms CPU time, 267.2M memory peak.
Jul 15 05:32:26.095563 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 15 05:32:26.096598 systemd[1]: Started sshd@0-172.237.155.110:22-139.178.68.195:34464.service - OpenSSH per-connection server daemon (139.178.68.195:34464).
Jul 15 05:32:26.442885 sshd[1722]: Accepted publickey for core from 139.178.68.195 port 34464 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:32:26.445350 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:32:26.451967 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 15 05:32:26.453177 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 15 05:32:26.460260 systemd-logind[1538]: New session 1 of user core.
Jul 15 05:32:26.481060 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 15 05:32:26.484174 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 15 05:32:26.494770 (systemd)[1727]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 15 05:32:26.497584 systemd-logind[1538]: New session c1 of user core.
Jul 15 05:32:26.615987 systemd[1727]: Queued start job for default target default.target.
Jul 15 05:32:26.622046 systemd[1727]: Created slice app.slice - User Application Slice.
Jul 15 05:32:26.622089 systemd[1727]: Reached target paths.target - Paths.
Jul 15 05:32:26.622130 systemd[1727]: Reached target timers.target - Timers.
Jul 15 05:32:26.623353 systemd[1727]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 15 05:32:26.632973 systemd[1727]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 15 05:32:26.633032 systemd[1727]: Reached target sockets.target - Sockets.
Jul 15 05:32:26.633063 systemd[1727]: Reached target basic.target - Basic System.
Jul 15 05:32:26.633127 systemd[1727]: Reached target default.target - Main User Target.
Jul 15 05:32:26.633156 systemd[1727]: Startup finished in 127ms.
Jul 15 05:32:26.633719 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 15 05:32:26.652261 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 15 05:32:26.912210 systemd[1]: Started sshd@1-172.237.155.110:22-139.178.68.195:34474.service - OpenSSH per-connection server daemon (139.178.68.195:34474).
Jul 15 05:32:27.251054 sshd[1738]: Accepted publickey for core from 139.178.68.195 port 34474 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:32:27.253064 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:32:27.258740 systemd-logind[1538]: New session 2 of user core.
Jul 15 05:32:27.269250 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 15 05:32:27.503324 sshd[1741]: Connection closed by 139.178.68.195 port 34474
Jul 15 05:32:27.503917 sshd-session[1738]: pam_unix(sshd:session): session closed for user core
Jul 15 05:32:27.508973 systemd-logind[1538]: Session 2 logged out. Waiting for processes to exit.
Jul 15 05:32:27.509875 systemd[1]: sshd@1-172.237.155.110:22-139.178.68.195:34474.service: Deactivated successfully.
Jul 15 05:32:27.512348 systemd[1]: session-2.scope: Deactivated successfully.
Jul 15 05:32:27.514786 systemd-logind[1538]: Removed session 2.
Jul 15 05:32:27.572231 systemd[1]: Started sshd@2-172.237.155.110:22-139.178.68.195:34476.service - OpenSSH per-connection server daemon (139.178.68.195:34476).
Jul 15 05:32:27.910445 sshd[1747]: Accepted publickey for core from 139.178.68.195 port 34476 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:32:27.912057 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:32:27.917292 systemd-logind[1538]: New session 3 of user core.
Jul 15 05:32:27.923174 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 15 05:32:28.158244 sshd[1750]: Connection closed by 139.178.68.195 port 34476
Jul 15 05:32:28.159215 sshd-session[1747]: pam_unix(sshd:session): session closed for user core
Jul 15 05:32:28.164358 systemd[1]: sshd@2-172.237.155.110:22-139.178.68.195:34476.service: Deactivated successfully.
Jul 15 05:32:28.167723 systemd[1]: session-3.scope: Deactivated successfully.
Jul 15 05:32:28.168575 systemd-logind[1538]: Session 3 logged out. Waiting for processes to exit.
Jul 15 05:32:28.170436 systemd-logind[1538]: Removed session 3.
Jul 15 05:32:28.223247 systemd[1]: Started sshd@3-172.237.155.110:22-139.178.68.195:34482.service - OpenSSH per-connection server daemon (139.178.68.195:34482).
Jul 15 05:32:28.566994 sshd[1756]: Accepted publickey for core from 139.178.68.195 port 34482 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:32:28.569449 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:32:28.575391 systemd-logind[1538]: New session 4 of user core.
Jul 15 05:32:28.586211 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 15 05:32:28.819838 sshd[1759]: Connection closed by 139.178.68.195 port 34482
Jul 15 05:32:28.820334 sshd-session[1756]: pam_unix(sshd:session): session closed for user core
Jul 15 05:32:28.825383 systemd-logind[1538]: Session 4 logged out. Waiting for processes to exit.
Jul 15 05:32:28.826300 systemd[1]: sshd@3-172.237.155.110:22-139.178.68.195:34482.service: Deactivated successfully.
Jul 15 05:32:28.828894 systemd[1]: session-4.scope: Deactivated successfully.
Jul 15 05:32:28.831599 systemd-logind[1538]: Removed session 4.
Jul 15 05:32:28.892864 systemd[1]: Started sshd@4-172.237.155.110:22-139.178.68.195:34486.service - OpenSSH per-connection server daemon (139.178.68.195:34486).
Jul 15 05:32:29.245888 sshd[1765]: Accepted publickey for core from 139.178.68.195 port 34486 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:32:29.247944 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:32:29.253538 systemd-logind[1538]: New session 5 of user core.
Jul 15 05:32:29.269199 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 15 05:32:29.453750 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 15 05:32:29.454199 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 05:32:29.469673 sudo[1769]: pam_unix(sudo:session): session closed for user root
Jul 15 05:32:29.521283 sshd[1768]: Connection closed by 139.178.68.195 port 34486
Jul 15 05:32:29.521929 sshd-session[1765]: pam_unix(sshd:session): session closed for user core
Jul 15 05:32:29.525874 systemd-logind[1538]: Session 5 logged out. Waiting for processes to exit.
Jul 15 05:32:29.526130 systemd[1]: sshd@4-172.237.155.110:22-139.178.68.195:34486.service: Deactivated successfully.
Jul 15 05:32:29.527520 systemd[1]: session-5.scope: Deactivated successfully.
Jul 15 05:32:29.528808 systemd-logind[1538]: Removed session 5.
Jul 15 05:32:29.576170 systemd[1]: Started sshd@5-172.237.155.110:22-139.178.68.195:34500.service - OpenSSH per-connection server daemon (139.178.68.195:34500).
Jul 15 05:32:29.905658 sshd[1775]: Accepted publickey for core from 139.178.68.195 port 34500 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:32:29.906413 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:32:29.910281 systemd-logind[1538]: New session 6 of user core.
Jul 15 05:32:29.917175 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 15 05:32:30.099661 sudo[1780]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 15 05:32:30.099915 sudo[1780]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 05:32:30.103398 sudo[1780]: pam_unix(sudo:session): session closed for user root
Jul 15 05:32:30.107645 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 15 05:32:30.107872 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 05:32:30.115631 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 15 05:32:30.144872 augenrules[1802]: No rules
Jul 15 05:32:30.145995 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 15 05:32:30.146278 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 15 05:32:30.147018 sudo[1779]: pam_unix(sudo:session): session closed for user root
Jul 15 05:32:30.197046 sshd[1778]: Connection closed by 139.178.68.195 port 34500
Jul 15 05:32:30.197801 sshd-session[1775]: pam_unix(sshd:session): session closed for user core
Jul 15 05:32:30.201452 systemd-logind[1538]: Session 6 logged out. Waiting for processes to exit.
Jul 15 05:32:30.201617 systemd[1]: sshd@5-172.237.155.110:22-139.178.68.195:34500.service: Deactivated successfully.
Jul 15 05:32:30.203240 systemd[1]: session-6.scope: Deactivated successfully.
Jul 15 05:32:30.204463 systemd-logind[1538]: Removed session 6.
Jul 15 05:32:30.265427 systemd[1]: Started sshd@6-172.237.155.110:22-139.178.68.195:49102.service - OpenSSH per-connection server daemon (139.178.68.195:49102).
Jul 15 05:32:30.606854 sshd[1811]: Accepted publickey for core from 139.178.68.195 port 49102 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:32:30.608143 sshd-session[1811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:32:30.612259 systemd-logind[1538]: New session 7 of user core.
Jul 15 05:32:30.618167 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 15 05:32:30.802655 sudo[1815]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 15 05:32:30.802892 sudo[1815]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 05:32:31.028608 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 15 05:32:31.037363 (dockerd)[1833]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 15 05:32:31.189214 dockerd[1833]: time="2025-07-15T05:32:31.189160934Z" level=info msg="Starting up"
Jul 15 05:32:31.189784 dockerd[1833]: time="2025-07-15T05:32:31.189754933Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 15 05:32:31.198144 dockerd[1833]: time="2025-07-15T05:32:31.198122464Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jul 15 05:32:31.232622 dockerd[1833]: time="2025-07-15T05:32:31.232592592Z" level=info msg="Loading containers: start."
Jul 15 05:32:31.241091 kernel: Initializing XFRM netlink socket
Jul 15 05:32:31.440453 systemd-networkd[1457]: docker0: Link UP
Jul 15 05:32:31.442796 dockerd[1833]: time="2025-07-15T05:32:31.442764307Z" level=info msg="Loading containers: done."
Jul 15 05:32:31.453558 dockerd[1833]: time="2025-07-15T05:32:31.453530731Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 15 05:32:31.453660 dockerd[1833]: time="2025-07-15T05:32:31.453578740Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Jul 15 05:32:31.453660 dockerd[1833]: time="2025-07-15T05:32:31.453640570Z" level=info msg="Initializing buildkit"
Jul 15 05:32:31.468039 dockerd[1833]: time="2025-07-15T05:32:31.468019756Z" level=info msg="Completed buildkit initialization"
Jul 15 05:32:31.472134 dockerd[1833]: time="2025-07-15T05:32:31.472103189Z" level=info msg="Daemon has completed initialization"
Jul 15 05:32:31.472232 dockerd[1833]: time="2025-07-15T05:32:31.472190531Z" level=info msg="API listen on /run/docker.sock"
Jul 15 05:32:31.474108 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 15 05:32:31.888892 containerd[1551]: time="2025-07-15T05:32:31.888587366Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\""
Jul 15 05:32:32.713524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3471445181.mount: Deactivated successfully.
Jul 15 05:32:33.688585 containerd[1551]: time="2025-07-15T05:32:33.688512566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:32:33.689397 containerd[1551]: time="2025-07-15T05:32:33.689301789Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079099"
Jul 15 05:32:33.689950 containerd[1551]: time="2025-07-15T05:32:33.689920572Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:32:33.691647 containerd[1551]: time="2025-07-15T05:32:33.691615788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:32:33.692565 containerd[1551]: time="2025-07-15T05:32:33.692320999Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 1.803686481s"
Jul 15 05:32:33.692565 containerd[1551]: time="2025-07-15T05:32:33.692354226Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\""
Jul 15 05:32:33.693198 containerd[1551]: time="2025-07-15T05:32:33.693172232Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\""
Jul 15 05:32:34.427793 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 15 05:32:34.431186 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 05:32:34.585912 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 05:32:34.588733 (kubelet)[2104]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 05:32:34.619329 kubelet[2104]: E0715 05:32:34.619281 2104 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 05:32:34.623513 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 05:32:34.623682 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 05:32:34.623996 systemd[1]: kubelet.service: Consumed 161ms CPU time, 110.7M memory peak.
Jul 15 05:32:35.092634 containerd[1551]: time="2025-07-15T05:32:35.092580899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:32:35.093521 containerd[1551]: time="2025-07-15T05:32:35.093313859Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018946"
Jul 15 05:32:35.094057 containerd[1551]: time="2025-07-15T05:32:35.094027641Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:32:35.095946 containerd[1551]: time="2025-07-15T05:32:35.095920978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:32:35.096681 containerd[1551]: time="2025-07-15T05:32:35.096653129Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 1.40339198s"
Jul 15 05:32:35.096750 containerd[1551]: time="2025-07-15T05:32:35.096738032Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\""
Jul 15 05:32:35.097440 containerd[1551]: time="2025-07-15T05:32:35.097355933Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\""
Jul 15 05:32:36.277774 containerd[1551]: time="2025-07-15T05:32:36.277715563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:32:36.278570 containerd[1551]: time="2025-07-15T05:32:36.278430184Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155055"
Jul 15 05:32:36.279179 containerd[1551]: time="2025-07-15T05:32:36.278994125Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:32:36.280613 containerd[1551]: time="2025-07-15T05:32:36.280571241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:32:36.281493 containerd[1551]: time="2025-07-15T05:32:36.281383339Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 1.184004083s"
Jul 15 05:32:36.281493 containerd[1551]: time="2025-07-15T05:32:36.281408243Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\""
Jul 15 05:32:36.281880 containerd[1551]: time="2025-07-15T05:32:36.281852941Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\""
Jul 15 05:32:37.432556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2653937470.mount: Deactivated successfully.
Jul 15 05:32:37.737803 containerd[1551]: time="2025-07-15T05:32:37.737673737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:32:37.738644 containerd[1551]: time="2025-07-15T05:32:37.738618537Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892746"
Jul 15 05:32:37.739888 containerd[1551]: time="2025-07-15T05:32:37.739021576Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:32:37.740415 containerd[1551]: time="2025-07-15T05:32:37.740389395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:32:37.740904 containerd[1551]: time="2025-07-15T05:32:37.740876960Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 1.458953184s"
Jul 15 05:32:37.740966 containerd[1551]: time="2025-07-15T05:32:37.740953304Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\""
Jul 15 05:32:37.741552 containerd[1551]: time="2025-07-15T05:32:37.741513157Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jul 15 05:32:38.469490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2067985075.mount: Deactivated successfully.
Jul 15 05:32:39.163613 containerd[1551]: time="2025-07-15T05:32:39.163567959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:32:39.164883 containerd[1551]: time="2025-07-15T05:32:39.164861698Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Jul 15 05:32:39.164984 containerd[1551]: time="2025-07-15T05:32:39.164965957Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:32:39.166677 containerd[1551]: time="2025-07-15T05:32:39.166658457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:32:39.167412 containerd[1551]: time="2025-07-15T05:32:39.167370924Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.425832451s"
Jul 15 05:32:39.167443 containerd[1551]: time="2025-07-15T05:32:39.167414746Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Jul 15 05:32:39.168011 containerd[1551]: time="2025-07-15T05:32:39.167992171Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 15 05:32:39.832687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1789833256.mount: Deactivated successfully.
Jul 15 05:32:39.837438 containerd[1551]: time="2025-07-15T05:32:39.837384907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 05:32:39.838224 containerd[1551]: time="2025-07-15T05:32:39.838184278Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Jul 15 05:32:39.838828 containerd[1551]: time="2025-07-15T05:32:39.838781286Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 05:32:39.840220 containerd[1551]: time="2025-07-15T05:32:39.840196978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 05:32:39.840869 containerd[1551]: time="2025-07-15T05:32:39.840843103Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 672.826204ms"
Jul 15 05:32:39.840903 containerd[1551]: time="2025-07-15T05:32:39.840870139Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 15 05:32:39.841498 containerd[1551]: time="2025-07-15T05:32:39.841448623Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jul 15 05:32:40.502356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount359526129.mount: Deactivated
successfully. Jul 15 05:32:41.726662 containerd[1551]: time="2025-07-15T05:32:41.726586046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:32:41.727595 containerd[1551]: time="2025-07-15T05:32:41.727491970Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175" Jul 15 05:32:41.728022 containerd[1551]: time="2025-07-15T05:32:41.727994784Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:32:41.729911 containerd[1551]: time="2025-07-15T05:32:41.729882283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:32:41.731300 containerd[1551]: time="2025-07-15T05:32:41.731251891Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 1.88977628s" Jul 15 05:32:41.731300 containerd[1551]: time="2025-07-15T05:32:41.731287863Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jul 15 05:32:44.211592 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:32:44.211704 systemd[1]: kubelet.service: Consumed 161ms CPU time, 110.7M memory peak. Jul 15 05:32:44.213379 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 15 05:32:44.236816 systemd[1]: Reload requested from client PID 2267 ('systemctl') (unit session-7.scope)... Jul 15 05:32:44.236907 systemd[1]: Reloading... Jul 15 05:32:44.360894 zram_generator::config[2311]: No configuration found. Jul 15 05:32:44.442510 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:32:44.527307 systemd[1]: Reloading finished in 290 ms. Jul 15 05:32:44.576541 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 15 05:32:44.576624 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 15 05:32:44.576886 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:32:44.576924 systemd[1]: kubelet.service: Consumed 116ms CPU time, 98.3M memory peak. Jul 15 05:32:44.578060 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:32:44.725035 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:32:44.731316 (kubelet)[2365]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 05:32:44.768097 kubelet[2365]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 05:32:44.768097 kubelet[2365]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 15 05:32:44.768097 kubelet[2365]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 05:32:44.768097 kubelet[2365]: I0715 05:32:44.767585 2365 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 05:32:45.086209 kubelet[2365]: I0715 05:32:45.086184 2365 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 15 05:32:45.086209 kubelet[2365]: I0715 05:32:45.086203 2365 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 05:32:45.086378 kubelet[2365]: I0715 05:32:45.086367 2365 server.go:956] "Client rotation is on, will bootstrap in background" Jul 15 05:32:45.109092 kubelet[2365]: E0715 05:32:45.109046 2365 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.237.155.110:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.237.155.110:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 15 05:32:45.109092 kubelet[2365]: I0715 05:32:45.109091 2365 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 05:32:45.116323 kubelet[2365]: I0715 05:32:45.116308 2365 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 05:32:45.119563 kubelet[2365]: I0715 05:32:45.119547 2365 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 05:32:45.119761 kubelet[2365]: I0715 05:32:45.119739 2365 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 05:32:45.119871 kubelet[2365]: I0715 05:32:45.119759 2365 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-237-155-110","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 05:32:45.119958 kubelet[2365]: I0715 05:32:45.119874 2365 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 
05:32:45.119958 kubelet[2365]: I0715 05:32:45.119880 2365 container_manager_linux.go:303] "Creating device plugin manager" Jul 15 05:32:45.120642 kubelet[2365]: I0715 05:32:45.120624 2365 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:32:45.123427 kubelet[2365]: I0715 05:32:45.123213 2365 kubelet.go:480] "Attempting to sync node with API server" Jul 15 05:32:45.123427 kubelet[2365]: I0715 05:32:45.123242 2365 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 05:32:45.123427 kubelet[2365]: I0715 05:32:45.123264 2365 kubelet.go:386] "Adding apiserver pod source" Jul 15 05:32:45.123427 kubelet[2365]: I0715 05:32:45.123283 2365 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 05:32:45.126385 kubelet[2365]: E0715 05:32:45.126111 2365 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.237.155.110:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-155-110&limit=500&resourceVersion=0\": dial tcp 172.237.155.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 15 05:32:45.128481 kubelet[2365]: E0715 05:32:45.128462 2365 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.237.155.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.237.155.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 15 05:32:45.129199 kubelet[2365]: I0715 05:32:45.129182 2365 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 15 05:32:45.129901 kubelet[2365]: I0715 05:32:45.129548 2365 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 15 
05:32:45.131054 kubelet[2365]: W0715 05:32:45.131030 2365 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 15 05:32:45.133681 kubelet[2365]: I0715 05:32:45.133664 2365 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 05:32:45.133719 kubelet[2365]: I0715 05:32:45.133709 2365 server.go:1289] "Started kubelet" Jul 15 05:32:45.134679 kubelet[2365]: I0715 05:32:45.134261 2365 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 05:32:45.135147 kubelet[2365]: I0715 05:32:45.135127 2365 server.go:317] "Adding debug handlers to kubelet server" Jul 15 05:32:45.137249 kubelet[2365]: I0715 05:32:45.136816 2365 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 05:32:45.137249 kubelet[2365]: I0715 05:32:45.137050 2365 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 05:32:45.138023 kubelet[2365]: E0715 05:32:45.137148 2365 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.237.155.110:6443/api/v1/namespaces/default/events\": dial tcp 172.237.155.110:6443: connect: connection refused" event="&Event{ObjectMeta:{172-237-155-110.185255d1ee3773c4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-237-155-110,UID:172-237-155-110,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-237-155-110,},FirstTimestamp:2025-07-15 05:32:45.133681604 +0000 UTC m=+0.395530989,LastTimestamp:2025-07-15 05:32:45.133681604 +0000 UTC m=+0.395530989,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-237-155-110,}" Jul 15 05:32:45.139169 kubelet[2365]: I0715 05:32:45.139155 2365 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 05:32:45.139300 kubelet[2365]: I0715 05:32:45.139288 2365 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 05:32:45.141581 kubelet[2365]: E0715 05:32:45.140760 2365 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-237-155-110\" not found" Jul 15 05:32:45.141581 kubelet[2365]: I0715 05:32:45.140784 2365 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 05:32:45.141581 kubelet[2365]: I0715 05:32:45.140889 2365 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 05:32:45.141581 kubelet[2365]: I0715 05:32:45.140926 2365 reconciler.go:26] "Reconciler: start to sync state" Jul 15 05:32:45.141746 kubelet[2365]: E0715 05:32:45.141726 2365 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.237.155.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.237.155.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 15 05:32:45.142117 kubelet[2365]: I0715 05:32:45.142105 2365 factory.go:223] Registration of the systemd container factory successfully Jul 15 05:32:45.142216 kubelet[2365]: I0715 05:32:45.142203 2365 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 05:32:45.143761 kubelet[2365]: I0715 05:32:45.143750 2365 factory.go:223] Registration of the containerd container factory successfully Jul 15 05:32:45.162413 kubelet[2365]: I0715 05:32:45.162366 2365 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jul 15 05:32:45.163320 kubelet[2365]: I0715 05:32:45.163288 2365 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 15 05:32:45.163320 kubelet[2365]: I0715 05:32:45.163305 2365 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 15 05:32:45.163320 kubelet[2365]: I0715 05:32:45.163319 2365 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 15 05:32:45.163386 kubelet[2365]: I0715 05:32:45.163325 2365 kubelet.go:2436] "Starting kubelet main sync loop" Jul 15 05:32:45.163386 kubelet[2365]: E0715 05:32:45.163358 2365 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 05:32:45.165832 kubelet[2365]: E0715 05:32:45.165782 2365 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.155.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-155-110?timeout=10s\": dial tcp 172.237.155.110:6443: connect: connection refused" interval="200ms" Jul 15 05:32:45.166890 kubelet[2365]: I0715 05:32:45.166879 2365 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 05:32:45.166951 kubelet[2365]: I0715 05:32:45.166943 2365 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 05:32:45.166988 kubelet[2365]: I0715 05:32:45.166982 2365 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:32:45.169067 kubelet[2365]: I0715 05:32:45.169056 2365 policy_none.go:49] "None policy: Start" Jul 15 05:32:45.169143 kubelet[2365]: I0715 05:32:45.169135 2365 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 05:32:45.169179 kubelet[2365]: I0715 05:32:45.169173 2365 state_mem.go:35] "Initializing new in-memory state store" Jul 15 05:32:45.169474 kubelet[2365]: E0715 05:32:45.169455 2365 reflector.go:200] "Failed to watch" err="failed to 
list *v1.RuntimeClass: Get \"https://172.237.155.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.237.155.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 15 05:32:45.173596 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 15 05:32:45.186323 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 15 05:32:45.188674 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 15 05:32:45.197777 kubelet[2365]: E0715 05:32:45.197760 2365 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 15 05:32:45.198605 kubelet[2365]: I0715 05:32:45.198218 2365 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 05:32:45.198605 kubelet[2365]: I0715 05:32:45.198234 2365 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 05:32:45.198605 kubelet[2365]: I0715 05:32:45.198453 2365 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 05:32:45.199778 kubelet[2365]: E0715 05:32:45.199759 2365 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 15 05:32:45.200150 kubelet[2365]: E0715 05:32:45.200122 2365 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-237-155-110\" not found" Jul 15 05:32:45.276437 systemd[1]: Created slice kubepods-burstable-podcd832213606fa568a48ce36b723104fa.slice - libcontainer container kubepods-burstable-podcd832213606fa568a48ce36b723104fa.slice. 
Jul 15 05:32:45.285687 kubelet[2365]: E0715 05:32:45.285657 2365 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-155-110\" not found" node="172-237-155-110" Jul 15 05:32:45.289028 systemd[1]: Created slice kubepods-burstable-pod3ad1456f7c5ddbd01581be04da04a260.slice - libcontainer container kubepods-burstable-pod3ad1456f7c5ddbd01581be04da04a260.slice. Jul 15 05:32:45.296126 kubelet[2365]: E0715 05:32:45.296110 2365 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-155-110\" not found" node="172-237-155-110" Jul 15 05:32:45.297422 systemd[1]: Created slice kubepods-burstable-pod10343bc0b2f631d4dbccd874b610f69a.slice - libcontainer container kubepods-burstable-pod10343bc0b2f631d4dbccd874b610f69a.slice. Jul 15 05:32:45.298775 kubelet[2365]: E0715 05:32:45.298757 2365 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-155-110\" not found" node="172-237-155-110" Jul 15 05:32:45.299596 kubelet[2365]: I0715 05:32:45.299571 2365 kubelet_node_status.go:75] "Attempting to register node" node="172-237-155-110" Jul 15 05:32:45.299815 kubelet[2365]: E0715 05:32:45.299800 2365 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.155.110:6443/api/v1/nodes\": dial tcp 172.237.155.110:6443: connect: connection refused" node="172-237-155-110" Jul 15 05:32:45.342133 kubelet[2365]: I0715 05:32:45.342056 2365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/10343bc0b2f631d4dbccd874b610f69a-kubeconfig\") pod \"kube-scheduler-172-237-155-110\" (UID: \"10343bc0b2f631d4dbccd874b610f69a\") " pod="kube-system/kube-scheduler-172-237-155-110" Jul 15 05:32:45.342272 kubelet[2365]: I0715 05:32:45.342234 2365 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cd832213606fa568a48ce36b723104fa-k8s-certs\") pod \"kube-apiserver-172-237-155-110\" (UID: \"cd832213606fa568a48ce36b723104fa\") " pod="kube-system/kube-apiserver-172-237-155-110" Jul 15 05:32:45.342396 kubelet[2365]: I0715 05:32:45.342329 2365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd832213606fa568a48ce36b723104fa-usr-share-ca-certificates\") pod \"kube-apiserver-172-237-155-110\" (UID: \"cd832213606fa568a48ce36b723104fa\") " pod="kube-system/kube-apiserver-172-237-155-110" Jul 15 05:32:45.342396 kubelet[2365]: I0715 05:32:45.342352 2365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cd832213606fa568a48ce36b723104fa-ca-certs\") pod \"kube-apiserver-172-237-155-110\" (UID: \"cd832213606fa568a48ce36b723104fa\") " pod="kube-system/kube-apiserver-172-237-155-110" Jul 15 05:32:45.342396 kubelet[2365]: I0715 05:32:45.342364 2365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3ad1456f7c5ddbd01581be04da04a260-ca-certs\") pod \"kube-controller-manager-172-237-155-110\" (UID: \"3ad1456f7c5ddbd01581be04da04a260\") " pod="kube-system/kube-controller-manager-172-237-155-110" Jul 15 05:32:45.342539 kubelet[2365]: I0715 05:32:45.342382 2365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3ad1456f7c5ddbd01581be04da04a260-flexvolume-dir\") pod \"kube-controller-manager-172-237-155-110\" (UID: \"3ad1456f7c5ddbd01581be04da04a260\") " pod="kube-system/kube-controller-manager-172-237-155-110" Jul 15 05:32:45.342539 kubelet[2365]: I0715 
05:32:45.342480 2365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3ad1456f7c5ddbd01581be04da04a260-k8s-certs\") pod \"kube-controller-manager-172-237-155-110\" (UID: \"3ad1456f7c5ddbd01581be04da04a260\") " pod="kube-system/kube-controller-manager-172-237-155-110" Jul 15 05:32:45.342539 kubelet[2365]: I0715 05:32:45.342493 2365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3ad1456f7c5ddbd01581be04da04a260-kubeconfig\") pod \"kube-controller-manager-172-237-155-110\" (UID: \"3ad1456f7c5ddbd01581be04da04a260\") " pod="kube-system/kube-controller-manager-172-237-155-110" Jul 15 05:32:45.342539 kubelet[2365]: I0715 05:32:45.342504 2365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3ad1456f7c5ddbd01581be04da04a260-usr-share-ca-certificates\") pod \"kube-controller-manager-172-237-155-110\" (UID: \"3ad1456f7c5ddbd01581be04da04a260\") " pod="kube-system/kube-controller-manager-172-237-155-110" Jul 15 05:32:45.366572 kubelet[2365]: E0715 05:32:45.366540 2365 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.155.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-155-110?timeout=10s\": dial tcp 172.237.155.110:6443: connect: connection refused" interval="400ms" Jul 15 05:32:45.501264 kubelet[2365]: I0715 05:32:45.501246 2365 kubelet_node_status.go:75] "Attempting to register node" node="172-237-155-110" Jul 15 05:32:45.501440 kubelet[2365]: E0715 05:32:45.501424 2365 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.155.110:6443/api/v1/nodes\": dial tcp 172.237.155.110:6443: connect: connection refused" node="172-237-155-110" Jul 15 
05:32:45.586342 kubelet[2365]: E0715 05:32:45.586318 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:32:45.586828 containerd[1551]: time="2025-07-15T05:32:45.586794957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-237-155-110,Uid:cd832213606fa568a48ce36b723104fa,Namespace:kube-system,Attempt:0,}" Jul 15 05:32:45.598325 kubelet[2365]: E0715 05:32:45.597346 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:32:45.598692 containerd[1551]: time="2025-07-15T05:32:45.598490305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-237-155-110,Uid:3ad1456f7c5ddbd01581be04da04a260,Namespace:kube-system,Attempt:0,}" Jul 15 05:32:45.600101 kubelet[2365]: E0715 05:32:45.599818 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:32:45.601643 containerd[1551]: time="2025-07-15T05:32:45.601615680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-237-155-110,Uid:10343bc0b2f631d4dbccd874b610f69a,Namespace:kube-system,Attempt:0,}" Jul 15 05:32:45.606257 containerd[1551]: time="2025-07-15T05:32:45.606225414Z" level=info msg="connecting to shim 0e822d8d04200aabe144ba3f3087c858098134c40716ba083aa4fe348b6bbc9a" address="unix:///run/containerd/s/389a133ba4419a04f0e6b383cc3a39a75776fe8f7161eeb29906c194ac6537bc" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:32:45.630513 containerd[1551]: time="2025-07-15T05:32:45.630486825Z" level=info msg="connecting to shim c648aa22d76926e8a3498420a04ce84b83d44e5384dedfe816b6de32fc734a62" 
address="unix:///run/containerd/s/178cc3c1b2cfea9fd518a2fedd74c9b33191dae5ef9f1bc93402b65648fbd170" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:32:45.644844 containerd[1551]: time="2025-07-15T05:32:45.644812253Z" level=info msg="connecting to shim feb7c13906a7bbcec22702095dc5630b0f27b001dcf7312b55918cb7fa37e26e" address="unix:///run/containerd/s/ad3f316a3e5e96b7fc235c80e1159d91d93e1ed871d6adda292256524d1af8de" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:32:45.654259 systemd[1]: Started cri-containerd-0e822d8d04200aabe144ba3f3087c858098134c40716ba083aa4fe348b6bbc9a.scope - libcontainer container 0e822d8d04200aabe144ba3f3087c858098134c40716ba083aa4fe348b6bbc9a. Jul 15 05:32:45.661957 systemd[1]: Started cri-containerd-c648aa22d76926e8a3498420a04ce84b83d44e5384dedfe816b6de32fc734a62.scope - libcontainer container c648aa22d76926e8a3498420a04ce84b83d44e5384dedfe816b6de32fc734a62. Jul 15 05:32:45.668847 systemd[1]: Started cri-containerd-feb7c13906a7bbcec22702095dc5630b0f27b001dcf7312b55918cb7fa37e26e.scope - libcontainer container feb7c13906a7bbcec22702095dc5630b0f27b001dcf7312b55918cb7fa37e26e. 
Jul 15 05:32:45.681962 kubelet[2365]: E0715 05:32:45.681765 2365 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.237.155.110:6443/api/v1/namespaces/default/events\": dial tcp 172.237.155.110:6443: connect: connection refused" event="&Event{ObjectMeta:{172-237-155-110.185255d1ee3773c4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-237-155-110,UID:172-237-155-110,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-237-155-110,},FirstTimestamp:2025-07-15 05:32:45.133681604 +0000 UTC m=+0.395530989,LastTimestamp:2025-07-15 05:32:45.133681604 +0000 UTC m=+0.395530989,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-237-155-110,}"
Jul 15 05:32:45.721188 containerd[1551]: time="2025-07-15T05:32:45.721010998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-237-155-110,Uid:3ad1456f7c5ddbd01581be04da04a260,Namespace:kube-system,Attempt:0,} returns sandbox id \"c648aa22d76926e8a3498420a04ce84b83d44e5384dedfe816b6de32fc734a62\""
Jul 15 05:32:45.723751 kubelet[2365]: E0715 05:32:45.723718 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:32:45.728103 containerd[1551]: time="2025-07-15T05:32:45.727826353Z" level=info msg="CreateContainer within sandbox \"c648aa22d76926e8a3498420a04ce84b83d44e5384dedfe816b6de32fc734a62\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 15 05:32:45.729244 containerd[1551]: time="2025-07-15T05:32:45.729225852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-237-155-110,Uid:cd832213606fa568a48ce36b723104fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e822d8d04200aabe144ba3f3087c858098134c40716ba083aa4fe348b6bbc9a\""
Jul 15 05:32:45.729894 kubelet[2365]: E0715 05:32:45.729863 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:32:45.732236 containerd[1551]: time="2025-07-15T05:32:45.732219026Z" level=info msg="CreateContainer within sandbox \"0e822d8d04200aabe144ba3f3087c858098134c40716ba083aa4fe348b6bbc9a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 15 05:32:45.739737 containerd[1551]: time="2025-07-15T05:32:45.739651705Z" level=info msg="Container 84ed2376bf8fbfb64600545a4517f425c81e51acf55f9a9439c8bc4a48d51982: CDI devices from CRI Config.CDIDevices: []"
Jul 15 05:32:45.742675 containerd[1551]: time="2025-07-15T05:32:45.742656881Z" level=info msg="Container 7df65fa044b936683f5d00796fdd5c34a3019f0de07d88d4cc9101343d751314: CDI devices from CRI Config.CDIDevices: []"
Jul 15 05:32:45.752888 containerd[1551]: time="2025-07-15T05:32:45.750968628Z" level=info msg="CreateContainer within sandbox \"c648aa22d76926e8a3498420a04ce84b83d44e5384dedfe816b6de32fc734a62\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"84ed2376bf8fbfb64600545a4517f425c81e51acf55f9a9439c8bc4a48d51982\""
Jul 15 05:32:45.754165 containerd[1551]: time="2025-07-15T05:32:45.753039499Z" level=info msg="StartContainer for \"84ed2376bf8fbfb64600545a4517f425c81e51acf55f9a9439c8bc4a48d51982\""
Jul 15 05:32:45.754165 containerd[1551]: time="2025-07-15T05:32:45.753926492Z" level=info msg="connecting to shim 84ed2376bf8fbfb64600545a4517f425c81e51acf55f9a9439c8bc4a48d51982" address="unix:///run/containerd/s/178cc3c1b2cfea9fd518a2fedd74c9b33191dae5ef9f1bc93402b65648fbd170" protocol=ttrpc version=3
Jul 15 05:32:45.759016 containerd[1551]: time="2025-07-15T05:32:45.758707695Z" level=info msg="CreateContainer within sandbox \"0e822d8d04200aabe144ba3f3087c858098134c40716ba083aa4fe348b6bbc9a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7df65fa044b936683f5d00796fdd5c34a3019f0de07d88d4cc9101343d751314\""
Jul 15 05:32:45.767172 containerd[1551]: time="2025-07-15T05:32:45.764485955Z" level=info msg="StartContainer for \"7df65fa044b936683f5d00796fdd5c34a3019f0de07d88d4cc9101343d751314\""
Jul 15 05:32:45.767644 kubelet[2365]: E0715 05:32:45.767618 2365 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.155.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-155-110?timeout=10s\": dial tcp 172.237.155.110:6443: connect: connection refused" interval="800ms"
Jul 15 05:32:45.769060 containerd[1551]: time="2025-07-15T05:32:45.768410506Z" level=info msg="connecting to shim 7df65fa044b936683f5d00796fdd5c34a3019f0de07d88d4cc9101343d751314" address="unix:///run/containerd/s/389a133ba4419a04f0e6b383cc3a39a75776fe8f7161eeb29906c194ac6537bc" protocol=ttrpc version=3
Jul 15 05:32:45.774612 containerd[1551]: time="2025-07-15T05:32:45.774478055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-237-155-110,Uid:10343bc0b2f631d4dbccd874b610f69a,Namespace:kube-system,Attempt:0,} returns sandbox id \"feb7c13906a7bbcec22702095dc5630b0f27b001dcf7312b55918cb7fa37e26e\""
Jul 15 05:32:45.778003 kubelet[2365]: E0715 05:32:45.777946 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:32:45.790829 containerd[1551]: time="2025-07-15T05:32:45.789438885Z" level=info msg="CreateContainer within sandbox \"feb7c13906a7bbcec22702095dc5630b0f27b001dcf7312b55918cb7fa37e26e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 15 05:32:45.790344 systemd[1]: Started cri-containerd-84ed2376bf8fbfb64600545a4517f425c81e51acf55f9a9439c8bc4a48d51982.scope - libcontainer container 84ed2376bf8fbfb64600545a4517f425c81e51acf55f9a9439c8bc4a48d51982.
Jul 15 05:32:45.826379 systemd[1]: Started cri-containerd-7df65fa044b936683f5d00796fdd5c34a3019f0de07d88d4cc9101343d751314.scope - libcontainer container 7df65fa044b936683f5d00796fdd5c34a3019f0de07d88d4cc9101343d751314.
Jul 15 05:32:45.827887 containerd[1551]: time="2025-07-15T05:32:45.826964374Z" level=info msg="Container 3886969d2ce76cc32d08742256b5d3ee1c473e6da57fce8cde496f4f5ed189a5: CDI devices from CRI Config.CDIDevices: []"
Jul 15 05:32:45.844376 containerd[1551]: time="2025-07-15T05:32:45.842191436Z" level=info msg="CreateContainer within sandbox \"feb7c13906a7bbcec22702095dc5630b0f27b001dcf7312b55918cb7fa37e26e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3886969d2ce76cc32d08742256b5d3ee1c473e6da57fce8cde496f4f5ed189a5\""
Jul 15 05:32:45.844376 containerd[1551]: time="2025-07-15T05:32:45.842832675Z" level=info msg="StartContainer for \"3886969d2ce76cc32d08742256b5d3ee1c473e6da57fce8cde496f4f5ed189a5\""
Jul 15 05:32:45.845712 containerd[1551]: time="2025-07-15T05:32:45.845139576Z" level=info msg="connecting to shim 3886969d2ce76cc32d08742256b5d3ee1c473e6da57fce8cde496f4f5ed189a5" address="unix:///run/containerd/s/ad3f316a3e5e96b7fc235c80e1159d91d93e1ed871d6adda292256524d1af8de" protocol=ttrpc version=3
Jul 15 05:32:45.885208 systemd[1]: Started cri-containerd-3886969d2ce76cc32d08742256b5d3ee1c473e6da57fce8cde496f4f5ed189a5.scope - libcontainer container 3886969d2ce76cc32d08742256b5d3ee1c473e6da57fce8cde496f4f5ed189a5.
Jul 15 05:32:45.893124 containerd[1551]: time="2025-07-15T05:32:45.892302465Z" level=info msg="StartContainer for \"84ed2376bf8fbfb64600545a4517f425c81e51acf55f9a9439c8bc4a48d51982\" returns successfully"
Jul 15 05:32:45.904657 kubelet[2365]: I0715 05:32:45.904629 2365 kubelet_node_status.go:75] "Attempting to register node" node="172-237-155-110"
Jul 15 05:32:45.905457 kubelet[2365]: E0715 05:32:45.905432 2365 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.155.110:6443/api/v1/nodes\": dial tcp 172.237.155.110:6443: connect: connection refused" node="172-237-155-110"
Jul 15 05:32:45.927329 containerd[1551]: time="2025-07-15T05:32:45.927287970Z" level=info msg="StartContainer for \"7df65fa044b936683f5d00796fdd5c34a3019f0de07d88d4cc9101343d751314\" returns successfully"
Jul 15 05:32:45.994350 containerd[1551]: time="2025-07-15T05:32:45.994319167Z" level=info msg="StartContainer for \"3886969d2ce76cc32d08742256b5d3ee1c473e6da57fce8cde496f4f5ed189a5\" returns successfully"
Jul 15 05:32:46.174778 kubelet[2365]: E0715 05:32:46.174681 2365 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-155-110\" not found" node="172-237-155-110"
Jul 15 05:32:46.174842 kubelet[2365]: E0715 05:32:46.174790 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:32:46.176100 kubelet[2365]: E0715 05:32:46.176067 2365 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-155-110\" not found" node="172-237-155-110"
Jul 15 05:32:46.176179 kubelet[2365]: E0715 05:32:46.176158 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:32:46.178934 kubelet[2365]: E0715 05:32:46.178910 2365 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-155-110\" not found" node="172-237-155-110"
Jul 15 05:32:46.179031 kubelet[2365]: E0715 05:32:46.179010 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:32:46.708740 kubelet[2365]: I0715 05:32:46.708713 2365 kubelet_node_status.go:75] "Attempting to register node" node="172-237-155-110"
Jul 15 05:32:47.128169 kubelet[2365]: I0715 05:32:47.127397 2365 apiserver.go:52] "Watching apiserver"
Jul 15 05:32:47.132288 kubelet[2365]: E0715 05:32:47.132260 2365 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-237-155-110\" not found" node="172-237-155-110"
Jul 15 05:32:47.141742 kubelet[2365]: I0715 05:32:47.141717 2365 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 15 05:32:47.180177 kubelet[2365]: E0715 05:32:47.180154 2365 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-155-110\" not found" node="172-237-155-110"
Jul 15 05:32:47.180262 kubelet[2365]: E0715 05:32:47.180241 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:32:47.180423 kubelet[2365]: E0715 05:32:47.180402 2365 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-155-110\" not found" node="172-237-155-110"
Jul 15 05:32:47.180488 kubelet[2365]: E0715 05:32:47.180469 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:32:47.247579 kubelet[2365]: I0715 05:32:47.247555 2365 kubelet_node_status.go:78] "Successfully registered node" node="172-237-155-110"
Jul 15 05:32:47.251088 kubelet[2365]: I0715 05:32:47.250152 2365 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-155-110"
Jul 15 05:32:47.277242 kubelet[2365]: E0715 05:32:47.277215 2365 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-237-155-110\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-237-155-110"
Jul 15 05:32:47.277279 kubelet[2365]: I0715 05:32:47.277251 2365 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-237-155-110"
Jul 15 05:32:47.280380 kubelet[2365]: E0715 05:32:47.280354 2365 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-237-155-110\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-237-155-110"
Jul 15 05:32:47.280380 kubelet[2365]: I0715 05:32:47.280372 2365 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-155-110"
Jul 15 05:32:47.283052 kubelet[2365]: E0715 05:32:47.283015 2365 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-237-155-110\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-237-155-110"
Jul 15 05:32:48.768678 kubelet[2365]: I0715 05:32:48.768634 2365 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-155-110"
Jul 15 05:32:48.772653 kubelet[2365]: E0715 05:32:48.772607 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:32:49.175049 systemd[1]: Reload requested from client PID 2642 ('systemctl') (unit session-7.scope)...
Jul 15 05:32:49.175065 systemd[1]: Reloading...
Jul 15 05:32:49.183982 kubelet[2365]: E0715 05:32:49.183775 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:32:49.260110 zram_generator::config[2682]: No configuration found.
Jul 15 05:32:49.360009 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 05:32:49.456669 systemd[1]: Reloading finished in 281 ms.
Jul 15 05:32:49.486058 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 05:32:49.502536 systemd[1]: kubelet.service: Deactivated successfully.
Jul 15 05:32:49.502856 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 05:32:49.502906 systemd[1]: kubelet.service: Consumed 719ms CPU time, 129.3M memory peak.
Jul 15 05:32:49.504487 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 05:32:49.666724 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 05:32:49.670335 (kubelet)[2737]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 15 05:32:49.705975 kubelet[2737]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 05:32:49.707153 kubelet[2737]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 15 05:32:49.707204 kubelet[2737]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 05:32:49.708121 kubelet[2737]: I0715 05:32:49.707312 2737 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 15 05:32:49.713056 kubelet[2737]: I0715 05:32:49.713042 2737 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 15 05:32:49.713148 kubelet[2737]: I0715 05:32:49.713139 2737 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 15 05:32:49.713299 kubelet[2737]: I0715 05:32:49.713289 2737 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 15 05:32:49.714056 kubelet[2737]: I0715 05:32:49.714045 2737 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jul 15 05:32:49.715697 kubelet[2737]: I0715 05:32:49.715676 2737 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 15 05:32:49.731549 kubelet[2737]: I0715 05:32:49.731516 2737 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 15 05:32:49.736057 kubelet[2737]: I0715 05:32:49.736035 2737 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 15 05:32:49.736325 kubelet[2737]: I0715 05:32:49.736309 2737 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 15 05:32:49.736464 kubelet[2737]: I0715 05:32:49.736360 2737 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-237-155-110","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 15 05:32:49.736560 kubelet[2737]: I0715 05:32:49.736552 2737 topology_manager.go:138] "Creating topology manager with none policy"
Jul 15 05:32:49.736607 kubelet[2737]: I0715 05:32:49.736600 2737 container_manager_linux.go:303] "Creating device plugin manager"
Jul 15 05:32:49.736675 kubelet[2737]: I0715 05:32:49.736668 2737 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 05:32:49.736845 kubelet[2737]: I0715 05:32:49.736836 2737 kubelet.go:480] "Attempting to sync node with API server"
Jul 15 05:32:49.737915 kubelet[2737]: I0715 05:32:49.737903 2737 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 15 05:32:49.737993 kubelet[2737]: I0715 05:32:49.737986 2737 kubelet.go:386] "Adding apiserver pod source"
Jul 15 05:32:49.745129 kubelet[2737]: I0715 05:32:49.745116 2737 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 15 05:32:49.746045 kubelet[2737]: I0715 05:32:49.746030 2737 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Jul 15 05:32:49.746423 kubelet[2737]: I0715 05:32:49.746410 2737 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 15 05:32:49.754634 kubelet[2737]: I0715 05:32:49.754622 2737 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 15 05:32:49.754726 kubelet[2737]: I0715 05:32:49.754718 2737 server.go:1289] "Started kubelet"
Jul 15 05:32:49.756494 kubelet[2737]: I0715 05:32:49.756449 2737 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 15 05:32:49.756576 kubelet[2737]: I0715 05:32:49.754953 2737 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 15 05:32:49.757183 kubelet[2737]: I0715 05:32:49.757113 2737 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 15 05:32:49.759542 kubelet[2737]: I0715 05:32:49.759519 2737 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 15 05:32:49.760284 kubelet[2737]: I0715 05:32:49.760272 2737 server.go:317] "Adding debug handlers to kubelet server"
Jul 15 05:32:49.761723 kubelet[2737]: I0715 05:32:49.761706 2737 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 15 05:32:49.763898 kubelet[2737]: I0715 05:32:49.763130 2737 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 15 05:32:49.764274 kubelet[2737]: I0715 05:32:49.763138 2737 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 15 05:32:49.764326 kubelet[2737]: E0715 05:32:49.763229 2737 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-237-155-110\" not found"
Jul 15 05:32:49.766184 kubelet[2737]: I0715 05:32:49.766172 2737 reconciler.go:26] "Reconciler: start to sync state"
Jul 15 05:32:49.767618 kubelet[2737]: I0715 05:32:49.767602 2737 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 15 05:32:49.769793 kubelet[2737]: I0715 05:32:49.769766 2737 factory.go:223] Registration of the containerd container factory successfully
Jul 15 05:32:49.770053 kubelet[2737]: I0715 05:32:49.769778 2737 factory.go:223] Registration of the systemd container factory successfully
Jul 15 05:32:49.770426 kubelet[2737]: E0715 05:32:49.770405 2737 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 15 05:32:49.794618 kubelet[2737]: I0715 05:32:49.794588 2737 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 15 05:32:49.804803 kubelet[2737]: I0715 05:32:49.804752 2737 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jul 15 05:32:49.804803 kubelet[2737]: I0715 05:32:49.804768 2737 status_manager.go:230] "Starting to sync pod status with apiserver"
Jul 15 05:32:49.804803 kubelet[2737]: I0715 05:32:49.804782 2737 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 15 05:32:49.804964 kubelet[2737]: I0715 05:32:49.804788 2737 kubelet.go:2436] "Starting kubelet main sync loop"
Jul 15 05:32:49.804964 kubelet[2737]: E0715 05:32:49.804928 2737 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 15 05:32:49.830196 kubelet[2737]: I0715 05:32:49.829382 2737 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 15 05:32:49.830196 kubelet[2737]: I0715 05:32:49.829394 2737 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 15 05:32:49.830196 kubelet[2737]: I0715 05:32:49.829419 2737 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 05:32:49.830196 kubelet[2737]: I0715 05:32:49.829507 2737 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 15 05:32:49.830196 kubelet[2737]: I0715 05:32:49.829514 2737 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 15 05:32:49.830196 kubelet[2737]: I0715 05:32:49.829527 2737 policy_none.go:49] "None policy: Start"
Jul 15 05:32:49.830196 kubelet[2737]: I0715 05:32:49.829535 2737 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 15 05:32:49.830196 kubelet[2737]: I0715 05:32:49.829543 2737 state_mem.go:35] "Initializing new in-memory state store"
Jul 15 05:32:49.830196 kubelet[2737]: I0715 05:32:49.829605 2737 state_mem.go:75] "Updated machine memory state"
Jul 15 05:32:49.834173 kubelet[2737]: E0715 05:32:49.834161 2737 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jul 15 05:32:49.834391 kubelet[2737]: I0715 05:32:49.834365 2737 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 15 05:32:49.834482 kubelet[2737]: I0715 05:32:49.834435 2737 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 15 05:32:49.835055 kubelet[2737]: I0715 05:32:49.834669 2737 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 15 05:32:49.837210 kubelet[2737]: E0715 05:32:49.837194 2737 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 15 05:32:49.906331 kubelet[2737]: I0715 05:32:49.906297 2737 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-237-155-110"
Jul 15 05:32:49.906715 kubelet[2737]: I0715 05:32:49.906675 2737 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-155-110"
Jul 15 05:32:49.906775 kubelet[2737]: I0715 05:32:49.906699 2737 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-155-110"
Jul 15 05:32:49.914566 kubelet[2737]: E0715 05:32:49.914527 2737 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-237-155-110\" already exists" pod="kube-system/kube-scheduler-172-237-155-110"
Jul 15 05:32:49.936850 kubelet[2737]: I0715 05:32:49.936792 2737 kubelet_node_status.go:75] "Attempting to register node" node="172-237-155-110"
Jul 15 05:32:49.946836 kubelet[2737]: I0715 05:32:49.946804 2737 kubelet_node_status.go:124] "Node was previously registered" node="172-237-155-110"
Jul 15 05:32:49.946893 kubelet[2737]: I0715 05:32:49.946870 2737 kubelet_node_status.go:78] "Successfully registered node" node="172-237-155-110"
Jul 15 05:32:49.969129 kubelet[2737]: I0715 05:32:49.968513 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/10343bc0b2f631d4dbccd874b610f69a-kubeconfig\") pod \"kube-scheduler-172-237-155-110\" (UID: \"10343bc0b2f631d4dbccd874b610f69a\") " pod="kube-system/kube-scheduler-172-237-155-110"
Jul 15 05:32:49.969129 kubelet[2737]: I0715 05:32:49.968540 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cd832213606fa568a48ce36b723104fa-k8s-certs\") pod \"kube-apiserver-172-237-155-110\" (UID: \"cd832213606fa568a48ce36b723104fa\") " pod="kube-system/kube-apiserver-172-237-155-110"
Jul 15 05:32:49.969129 kubelet[2737]: I0715 05:32:49.968558 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3ad1456f7c5ddbd01581be04da04a260-ca-certs\") pod \"kube-controller-manager-172-237-155-110\" (UID: \"3ad1456f7c5ddbd01581be04da04a260\") " pod="kube-system/kube-controller-manager-172-237-155-110"
Jul 15 05:32:49.969129 kubelet[2737]: I0715 05:32:49.968572 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3ad1456f7c5ddbd01581be04da04a260-kubeconfig\") pod \"kube-controller-manager-172-237-155-110\" (UID: \"3ad1456f7c5ddbd01581be04da04a260\") " pod="kube-system/kube-controller-manager-172-237-155-110"
Jul 15 05:32:49.969129 kubelet[2737]: I0715 05:32:49.968586 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cd832213606fa568a48ce36b723104fa-ca-certs\") pod \"kube-apiserver-172-237-155-110\" (UID: \"cd832213606fa568a48ce36b723104fa\") " pod="kube-system/kube-apiserver-172-237-155-110"
Jul 15 05:32:49.969356 kubelet[2737]: I0715 05:32:49.968618 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd832213606fa568a48ce36b723104fa-usr-share-ca-certificates\") pod \"kube-apiserver-172-237-155-110\" (UID: \"cd832213606fa568a48ce36b723104fa\") " pod="kube-system/kube-apiserver-172-237-155-110"
Jul 15 05:32:49.969356 kubelet[2737]: I0715 05:32:49.968636 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3ad1456f7c5ddbd01581be04da04a260-flexvolume-dir\") pod \"kube-controller-manager-172-237-155-110\" (UID: \"3ad1456f7c5ddbd01581be04da04a260\") " pod="kube-system/kube-controller-manager-172-237-155-110"
Jul 15 05:32:49.969356 kubelet[2737]: I0715 05:32:49.968651 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3ad1456f7c5ddbd01581be04da04a260-k8s-certs\") pod \"kube-controller-manager-172-237-155-110\" (UID: \"3ad1456f7c5ddbd01581be04da04a260\") " pod="kube-system/kube-controller-manager-172-237-155-110"
Jul 15 05:32:49.969356 kubelet[2737]: I0715 05:32:49.968669 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3ad1456f7c5ddbd01581be04da04a260-usr-share-ca-certificates\") pod \"kube-controller-manager-172-237-155-110\" (UID: \"3ad1456f7c5ddbd01581be04da04a260\") " pod="kube-system/kube-controller-manager-172-237-155-110"
Jul 15 05:32:50.213418 kubelet[2737]: E0715 05:32:50.213355 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:32:50.214006 kubelet[2737]: E0715 05:32:50.213776 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:32:50.215604 kubelet[2737]: E0715 05:32:50.215453 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:32:50.746219 kubelet[2737]: I0715 05:32:50.746160 2737 apiserver.go:52] "Watching apiserver"
Jul 15 05:32:50.765222 kubelet[2737]: I0715 05:32:50.765187 2737 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 15 05:32:50.779313 kubelet[2737]: I0715 05:32:50.779107 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-237-155-110" podStartSLOduration=1.779094851 podStartE2EDuration="1.779094851s" podCreationTimestamp="2025-07-15 05:32:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:32:50.77856067 +0000 UTC m=+1.104501929" watchObservedRunningTime="2025-07-15 05:32:50.779094851 +0000 UTC m=+1.105036120"
Jul 15 05:32:50.797789 kubelet[2737]: I0715 05:32:50.797629 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-237-155-110" podStartSLOduration=1.7976187000000001 podStartE2EDuration="1.7976187s" podCreationTimestamp="2025-07-15 05:32:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:32:50.797564124 +0000 UTC m=+1.123505403" watchObservedRunningTime="2025-07-15 05:32:50.7976187 +0000 UTC m=+1.123559969"
Jul 15 05:32:50.797789 kubelet[2737]: I0715 05:32:50.797716 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-237-155-110" podStartSLOduration=2.79771124 podStartE2EDuration="2.79771124s" podCreationTimestamp="2025-07-15 05:32:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:32:50.788500359 +0000 UTC m=+1.114441628" watchObservedRunningTime="2025-07-15 05:32:50.79771124 +0000 UTC m=+1.123652509"
Jul 15 05:32:50.819461 kubelet[2737]: I0715 05:32:50.819424 2737 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-237-155-110"
Jul 15 05:32:50.819649 kubelet[2737]: E0715 05:32:50.819611 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:32:50.820377 kubelet[2737]: I0715 05:32:50.819860 2737 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-155-110"
Jul 15 05:32:50.826183 kubelet[2737]: E0715 05:32:50.826168 2737 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-237-155-110\" already exists" pod="kube-system/kube-controller-manager-172-237-155-110"
Jul 15 05:32:50.826381 kubelet[2737]: E0715 05:32:50.826368 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:32:50.826765 kubelet[2737]: E0715 05:32:50.826753 2737 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-237-155-110\" already exists" pod="kube-system/kube-apiserver-172-237-155-110"
Jul 15 05:32:50.826896 kubelet[2737]: E0715 05:32:50.826882 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:32:51.824906 kubelet[2737]: E0715 05:32:51.821737 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:32:51.825619 kubelet[2737]: E0715 05:32:51.825572 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:32:51.825684 kubelet[2737]: E0715 05:32:51.825558 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:32:52.586265 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jul 15 05:32:55.248211 kubelet[2737]: I0715 05:32:55.248139 2737 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 15 05:32:55.249521 kubelet[2737]: I0715 05:32:55.248724 2737 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 15 05:32:55.249554 containerd[1551]: time="2025-07-15T05:32:55.248519573Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 15 05:32:56.215935 systemd[1]: Created slice kubepods-besteffort-poddff29ef0_4d17_466b_81f1_148eaa091e3f.slice - libcontainer container kubepods-besteffort-poddff29ef0_4d17_466b_81f1_148eaa091e3f.slice.
Jul 15 05:32:56.307966 kubelet[2737]: I0715 05:32:56.307905 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxvg5\" (UniqueName: \"kubernetes.io/projected/dff29ef0-4d17-466b-81f1-148eaa091e3f-kube-api-access-fxvg5\") pod \"kube-proxy-ld2z2\" (UID: \"dff29ef0-4d17-466b-81f1-148eaa091e3f\") " pod="kube-system/kube-proxy-ld2z2"
Jul 15 05:32:56.307966 kubelet[2737]: I0715 05:32:56.307950 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dff29ef0-4d17-466b-81f1-148eaa091e3f-xtables-lock\") pod \"kube-proxy-ld2z2\" (UID: \"dff29ef0-4d17-466b-81f1-148eaa091e3f\") " pod="kube-system/kube-proxy-ld2z2"
Jul 15 05:32:56.308349 kubelet[2737]: I0715 05:32:56.307977 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dff29ef0-4d17-466b-81f1-148eaa091e3f-lib-modules\") pod \"kube-proxy-ld2z2\" (UID: \"dff29ef0-4d17-466b-81f1-148eaa091e3f\") " pod="kube-system/kube-proxy-ld2z2"
Jul 15 05:32:56.308349 kubelet[2737]: I0715 05:32:56.308002 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dff29ef0-4d17-466b-81f1-148eaa091e3f-kube-proxy\") pod \"kube-proxy-ld2z2\" (UID: \"dff29ef0-4d17-466b-81f1-148eaa091e3f\") " pod="kube-system/kube-proxy-ld2z2"
Jul 15 05:32:56.339704 kubelet[2737]: E0715 05:32:56.339666 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:32:56.526006 kubelet[2737]: E0715 05:32:56.525606 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:32:56.528781 containerd[1551]: time="2025-07-15T05:32:56.527785546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ld2z2,Uid:dff29ef0-4d17-466b-81f1-148eaa091e3f,Namespace:kube-system,Attempt:0,}"
Jul 15 05:32:56.550414 systemd[1]: Created slice kubepods-besteffort-pod0c35cdc9_d051_4dd9_8a86_3a9de6fe8b21.slice - libcontainer container kubepods-besteffort-pod0c35cdc9_d051_4dd9_8a86_3a9de6fe8b21.slice.
Jul 15 05:32:56.560762 containerd[1551]: time="2025-07-15T05:32:56.560718788Z" level=info msg="connecting to shim 7b79292dbd0acaa84113b9af538de52f589248a70c072244154bc2ddad92f5c4" address="unix:///run/containerd/s/390b8cb2ced442664f7460619973842016776fdf6d93604ce1f415deb43492d9" namespace=k8s.io protocol=ttrpc version=3
Jul 15 05:32:56.586196 systemd[1]: Started cri-containerd-7b79292dbd0acaa84113b9af538de52f589248a70c072244154bc2ddad92f5c4.scope - libcontainer container 7b79292dbd0acaa84113b9af538de52f589248a70c072244154bc2ddad92f5c4.
Jul 15 05:32:56.609096 kubelet[2737]: I0715 05:32:56.608982 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd6nm\" (UniqueName: \"kubernetes.io/projected/0c35cdc9-d051-4dd9-8a86-3a9de6fe8b21-kube-api-access-vd6nm\") pod \"tigera-operator-747864d56d-8c4c4\" (UID: \"0c35cdc9-d051-4dd9-8a86-3a9de6fe8b21\") " pod="tigera-operator/tigera-operator-747864d56d-8c4c4"
Jul 15 05:32:56.609096 kubelet[2737]: I0715 05:32:56.609016 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0c35cdc9-d051-4dd9-8a86-3a9de6fe8b21-var-lib-calico\") pod \"tigera-operator-747864d56d-8c4c4\" (UID: \"0c35cdc9-d051-4dd9-8a86-3a9de6fe8b21\") " pod="tigera-operator/tigera-operator-747864d56d-8c4c4"
Jul 15 05:32:56.616358 containerd[1551]: time="2025-07-15T05:32:56.616276750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ld2z2,Uid:dff29ef0-4d17-466b-81f1-148eaa091e3f,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b79292dbd0acaa84113b9af538de52f589248a70c072244154bc2ddad92f5c4\""
Jul 15 05:32:56.617337 kubelet[2737]: E0715 05:32:56.617320 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:32:56.623448 containerd[1551]: time="2025-07-15T05:32:56.623372720Z" level=info msg="CreateContainer within sandbox \"7b79292dbd0acaa84113b9af538de52f589248a70c072244154bc2ddad92f5c4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 15 05:32:56.635626 containerd[1551]: time="2025-07-15T05:32:56.635601293Z" level=info msg="Container 89bf6ec4ca871dd22840df6fa97f0870462d43a54a9becfec579c3666db6aa0f: CDI devices from CRI Config.CDIDevices: []"
Jul 15 05:32:56.641139 containerd[1551]: time="2025-07-15T05:32:56.641114844Z" level=info msg="CreateContainer within sandbox \"7b79292dbd0acaa84113b9af538de52f589248a70c072244154bc2ddad92f5c4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"89bf6ec4ca871dd22840df6fa97f0870462d43a54a9becfec579c3666db6aa0f\""
Jul 15 05:32:56.641891 containerd[1551]: time="2025-07-15T05:32:56.641833055Z" level=info msg="StartContainer for \"89bf6ec4ca871dd22840df6fa97f0870462d43a54a9becfec579c3666db6aa0f\""
Jul 15 05:32:56.644278 containerd[1551]: time="2025-07-15T05:32:56.644138212Z" level=info msg="connecting to shim 89bf6ec4ca871dd22840df6fa97f0870462d43a54a9becfec579c3666db6aa0f" address="unix:///run/containerd/s/390b8cb2ced442664f7460619973842016776fdf6d93604ce1f415deb43492d9" protocol=ttrpc version=3
Jul 15 05:32:56.668216 systemd[1]: Started cri-containerd-89bf6ec4ca871dd22840df6fa97f0870462d43a54a9becfec579c3666db6aa0f.scope - libcontainer container 89bf6ec4ca871dd22840df6fa97f0870462d43a54a9becfec579c3666db6aa0f.
Jul 15 05:32:56.723495 containerd[1551]: time="2025-07-15T05:32:56.723432890Z" level=info msg="StartContainer for \"89bf6ec4ca871dd22840df6fa97f0870462d43a54a9becfec579c3666db6aa0f\" returns successfully"
Jul 15 05:32:56.839578 kubelet[2737]: E0715 05:32:56.838960 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:32:56.848040 kubelet[2737]: E0715 05:32:56.847940 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:32:56.861580 containerd[1551]: time="2025-07-15T05:32:56.861533811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-8c4c4,Uid:0c35cdc9-d051-4dd9-8a86-3a9de6fe8b21,Namespace:tigera-operator,Attempt:0,}"
Jul 15 05:32:56.881769 containerd[1551]: time="2025-07-15T05:32:56.881715394Z" level=info msg="connecting to shim 6ed6b0fb817fcd463acb047a7b7dfcc99da1eb076b7d26411120231c6ed7447e" address="unix:///run/containerd/s/a33c747a4d4766ce374bcfaffcf37dfdf6d08cbbc9c8cddc1c06e3d84e2c70c0" namespace=k8s.io protocol=ttrpc version=3
Jul 15 05:32:56.920700 systemd[1]: Started cri-containerd-6ed6b0fb817fcd463acb047a7b7dfcc99da1eb076b7d26411120231c6ed7447e.scope - libcontainer container 6ed6b0fb817fcd463acb047a7b7dfcc99da1eb076b7d26411120231c6ed7447e.
Jul 15 05:32:56.981752 containerd[1551]: time="2025-07-15T05:32:56.981695127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-8c4c4,Uid:0c35cdc9-d051-4dd9-8a86-3a9de6fe8b21,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6ed6b0fb817fcd463acb047a7b7dfcc99da1eb076b7d26411120231c6ed7447e\""
Jul 15 05:32:56.984333 containerd[1551]: time="2025-07-15T05:32:56.984066515Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 15 05:32:57.856338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3018888026.mount: Deactivated successfully.
Jul 15 05:32:58.521140 containerd[1551]: time="2025-07-15T05:32:58.520381129Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:32:58.521576 containerd[1551]: time="2025-07-15T05:32:58.521557297Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543"
Jul 15 05:32:58.521840 containerd[1551]: time="2025-07-15T05:32:58.521822799Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:32:58.523431 containerd[1551]: time="2025-07-15T05:32:58.523414591Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:32:58.524039 containerd[1551]: time="2025-07-15T05:32:58.523983725Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 1.539860546s"
Jul 15 05:32:58.524091 containerd[1551]: time="2025-07-15T05:32:58.524040610Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\""
Jul 15 05:32:58.527513 containerd[1551]: time="2025-07-15T05:32:58.527469511Z" level=info msg="CreateContainer within sandbox \"6ed6b0fb817fcd463acb047a7b7dfcc99da1eb076b7d26411120231c6ed7447e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 15 05:32:58.534160 containerd[1551]: time="2025-07-15T05:32:58.532089927Z" level=info msg="Container ec2998926cb375202b9837884164e4656bad89f1a68a1486b9652eb9f4237820: CDI devices from CRI Config.CDIDevices: []"
Jul 15 05:32:58.548915 containerd[1551]: time="2025-07-15T05:32:58.548872344Z" level=info msg="CreateContainer within sandbox \"6ed6b0fb817fcd463acb047a7b7dfcc99da1eb076b7d26411120231c6ed7447e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ec2998926cb375202b9837884164e4656bad89f1a68a1486b9652eb9f4237820\""
Jul 15 05:32:58.549931 containerd[1551]: time="2025-07-15T05:32:58.549898011Z" level=info msg="StartContainer for \"ec2998926cb375202b9837884164e4656bad89f1a68a1486b9652eb9f4237820\""
Jul 15 05:32:58.550962 containerd[1551]: time="2025-07-15T05:32:58.550898374Z" level=info msg="connecting to shim ec2998926cb375202b9837884164e4656bad89f1a68a1486b9652eb9f4237820" address="unix:///run/containerd/s/a33c747a4d4766ce374bcfaffcf37dfdf6d08cbbc9c8cddc1c06e3d84e2c70c0" protocol=ttrpc version=3
Jul 15 05:32:58.579278 systemd[1]: Started cri-containerd-ec2998926cb375202b9837884164e4656bad89f1a68a1486b9652eb9f4237820.scope - libcontainer container ec2998926cb375202b9837884164e4656bad89f1a68a1486b9652eb9f4237820.
Jul 15 05:32:58.609187 containerd[1551]: time="2025-07-15T05:32:58.609053234Z" level=info msg="StartContainer for \"ec2998926cb375202b9837884164e4656bad89f1a68a1486b9652eb9f4237820\" returns successfully"
Jul 15 05:32:58.856736 kubelet[2737]: I0715 05:32:58.855943 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ld2z2" podStartSLOduration=2.855917549 podStartE2EDuration="2.855917549s" podCreationTimestamp="2025-07-15 05:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:32:56.878306538 +0000 UTC m=+7.204247807" watchObservedRunningTime="2025-07-15 05:32:58.855917549 +0000 UTC m=+9.181858808"
Jul 15 05:32:58.947191 kubelet[2737]: E0715 05:32:58.947138 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:32:58.960561 kubelet[2737]: I0715 05:32:58.960503 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-8c4c4" podStartSLOduration=1.418897344 podStartE2EDuration="2.960484948s" podCreationTimestamp="2025-07-15 05:32:56 +0000 UTC" firstStartedPulling="2025-07-15 05:32:56.983553704 +0000 UTC m=+7.309494973" lastFinishedPulling="2025-07-15 05:32:58.525141318 +0000 UTC m=+8.851082577" observedRunningTime="2025-07-15 05:32:58.856440705 +0000 UTC m=+9.182381974" watchObservedRunningTime="2025-07-15 05:32:58.960484948 +0000 UTC m=+9.286426227"
Jul 15 05:32:59.466278 kubelet[2737]: E0715 05:32:59.465958 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:32:59.846836 kubelet[2737]: E0715 05:32:59.846718 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:32:59.847052 kubelet[2737]: E0715 05:32:59.846754 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:33:04.182085 sudo[1815]: pam_unix(sudo:session): session closed for user root
Jul 15 05:33:04.234054 sshd[1814]: Connection closed by 139.178.68.195 port 49102
Jul 15 05:33:04.236375 sshd-session[1811]: pam_unix(sshd:session): session closed for user core
Jul 15 05:33:04.244745 systemd[1]: sshd@6-172.237.155.110:22-139.178.68.195:49102.service: Deactivated successfully.
Jul 15 05:33:04.248408 systemd[1]: session-7.scope: Deactivated successfully.
Jul 15 05:33:04.248710 systemd[1]: session-7.scope: Consumed 4.237s CPU time, 232.2M memory peak.
Jul 15 05:33:04.251178 systemd-logind[1538]: Session 7 logged out. Waiting for processes to exit.
Jul 15 05:33:04.253257 systemd-logind[1538]: Removed session 7.
Jul 15 05:33:07.032688 update_engine[1539]: I20250715 05:33:07.032203 1539 update_attempter.cc:509] Updating boot flags...
Jul 15 05:33:07.180757 systemd[1]: Created slice kubepods-besteffort-pod63403450_d718_4338_96f1_17926e14386d.slice - libcontainer container kubepods-besteffort-pod63403450_d718_4338_96f1_17926e14386d.slice.
Jul 15 05:33:07.185849 kubelet[2737]: I0715 05:33:07.185804 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/63403450-d718-4338-96f1-17926e14386d-typha-certs\") pod \"calico-typha-7df7896c98-gpdln\" (UID: \"63403450-d718-4338-96f1-17926e14386d\") " pod="calico-system/calico-typha-7df7896c98-gpdln"
Jul 15 05:33:07.185849 kubelet[2737]: I0715 05:33:07.185841 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63403450-d718-4338-96f1-17926e14386d-tigera-ca-bundle\") pod \"calico-typha-7df7896c98-gpdln\" (UID: \"63403450-d718-4338-96f1-17926e14386d\") " pod="calico-system/calico-typha-7df7896c98-gpdln"
Jul 15 05:33:07.186151 kubelet[2737]: I0715 05:33:07.185856 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2f2v\" (UniqueName: \"kubernetes.io/projected/63403450-d718-4338-96f1-17926e14386d-kube-api-access-r2f2v\") pod \"calico-typha-7df7896c98-gpdln\" (UID: \"63403450-d718-4338-96f1-17926e14386d\") " pod="calico-system/calico-typha-7df7896c98-gpdln"
Jul 15 05:33:07.408956 systemd[1]: Created slice kubepods-besteffort-podae09ca6f_7c6e_47b0_a764_23bb61e448dc.slice - libcontainer container kubepods-besteffort-podae09ca6f_7c6e_47b0_a764_23bb61e448dc.slice.
Jul 15 05:33:07.490203 kubelet[2737]: I0715 05:33:07.487566 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm5g9\" (UniqueName: \"kubernetes.io/projected/ae09ca6f-7c6e-47b0-a764-23bb61e448dc-kube-api-access-fm5g9\") pod \"calico-node-xwgmt\" (UID: \"ae09ca6f-7c6e-47b0-a764-23bb61e448dc\") " pod="calico-system/calico-node-xwgmt"
Jul 15 05:33:07.490203 kubelet[2737]: I0715 05:33:07.487597 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ae09ca6f-7c6e-47b0-a764-23bb61e448dc-policysync\") pod \"calico-node-xwgmt\" (UID: \"ae09ca6f-7c6e-47b0-a764-23bb61e448dc\") " pod="calico-system/calico-node-xwgmt"
Jul 15 05:33:07.490203 kubelet[2737]: I0715 05:33:07.487610 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ae09ca6f-7c6e-47b0-a764-23bb61e448dc-cni-net-dir\") pod \"calico-node-xwgmt\" (UID: \"ae09ca6f-7c6e-47b0-a764-23bb61e448dc\") " pod="calico-system/calico-node-xwgmt"
Jul 15 05:33:07.490203 kubelet[2737]: I0715 05:33:07.487622 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ae09ca6f-7c6e-47b0-a764-23bb61e448dc-flexvol-driver-host\") pod \"calico-node-xwgmt\" (UID: \"ae09ca6f-7c6e-47b0-a764-23bb61e448dc\") " pod="calico-system/calico-node-xwgmt"
Jul 15 05:33:07.490203 kubelet[2737]: I0715 05:33:07.487635 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae09ca6f-7c6e-47b0-a764-23bb61e448dc-lib-modules\") pod \"calico-node-xwgmt\" (UID: \"ae09ca6f-7c6e-47b0-a764-23bb61e448dc\") " pod="calico-system/calico-node-xwgmt"
Jul 15 05:33:07.490374 kubelet[2737]: I0715 05:33:07.487645 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ae09ca6f-7c6e-47b0-a764-23bb61e448dc-node-certs\") pod \"calico-node-xwgmt\" (UID: \"ae09ca6f-7c6e-47b0-a764-23bb61e448dc\") " pod="calico-system/calico-node-xwgmt"
Jul 15 05:33:07.490374 kubelet[2737]: I0715 05:33:07.487655 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae09ca6f-7c6e-47b0-a764-23bb61e448dc-tigera-ca-bundle\") pod \"calico-node-xwgmt\" (UID: \"ae09ca6f-7c6e-47b0-a764-23bb61e448dc\") " pod="calico-system/calico-node-xwgmt"
Jul 15 05:33:07.490374 kubelet[2737]: I0715 05:33:07.487666 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae09ca6f-7c6e-47b0-a764-23bb61e448dc-xtables-lock\") pod \"calico-node-xwgmt\" (UID: \"ae09ca6f-7c6e-47b0-a764-23bb61e448dc\") " pod="calico-system/calico-node-xwgmt"
Jul 15 05:33:07.490374 kubelet[2737]: I0715 05:33:07.487682 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ae09ca6f-7c6e-47b0-a764-23bb61e448dc-cni-bin-dir\") pod \"calico-node-xwgmt\" (UID: \"ae09ca6f-7c6e-47b0-a764-23bb61e448dc\") " pod="calico-system/calico-node-xwgmt"
Jul 15 05:33:07.490374 kubelet[2737]: I0715 05:33:07.487694 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ae09ca6f-7c6e-47b0-a764-23bb61e448dc-cni-log-dir\") pod \"calico-node-xwgmt\" (UID: \"ae09ca6f-7c6e-47b0-a764-23bb61e448dc\") " pod="calico-system/calico-node-xwgmt"
Jul 15 05:33:07.490462 kubelet[2737]: I0715 05:33:07.487705 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ae09ca6f-7c6e-47b0-a764-23bb61e448dc-var-run-calico\") pod \"calico-node-xwgmt\" (UID: \"ae09ca6f-7c6e-47b0-a764-23bb61e448dc\") " pod="calico-system/calico-node-xwgmt"
Jul 15 05:33:07.490462 kubelet[2737]: I0715 05:33:07.487716 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ae09ca6f-7c6e-47b0-a764-23bb61e448dc-var-lib-calico\") pod \"calico-node-xwgmt\" (UID: \"ae09ca6f-7c6e-47b0-a764-23bb61e448dc\") " pod="calico-system/calico-node-xwgmt"
Jul 15 05:33:07.504216 kubelet[2737]: E0715 05:33:07.503617 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:33:07.505409 containerd[1551]: time="2025-07-15T05:33:07.505368252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7df7896c98-gpdln,Uid:63403450-d718-4338-96f1-17926e14386d,Namespace:calico-system,Attempt:0,}"
Jul 15 05:33:07.528916 containerd[1551]: time="2025-07-15T05:33:07.528818148Z" level=info msg="connecting to shim cafe80b53077deb4e2f4ab38d7ddbd9dd3c86454af5eff5023383336823a8d4c" address="unix:///run/containerd/s/5b61203dd675a64caa31869b9ac1c50ea48289c7550d69549c4fdcae872b5f2c" namespace=k8s.io protocol=ttrpc version=3
Jul 15 05:33:07.559436 systemd[1]: Started cri-containerd-cafe80b53077deb4e2f4ab38d7ddbd9dd3c86454af5eff5023383336823a8d4c.scope - libcontainer container cafe80b53077deb4e2f4ab38d7ddbd9dd3c86454af5eff5023383336823a8d4c.
Jul 15 05:33:07.575464 kubelet[2737]: E0715 05:33:07.575289 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-62x4x" podUID="462befc7-9998-48e9-9fe3-be8ad0e74203"
Jul 15 05:33:07.587868 kubelet[2737]: I0715 05:33:07.587839 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/462befc7-9998-48e9-9fe3-be8ad0e74203-kubelet-dir\") pod \"csi-node-driver-62x4x\" (UID: \"462befc7-9998-48e9-9fe3-be8ad0e74203\") " pod="calico-system/csi-node-driver-62x4x"
Jul 15 05:33:07.587868 kubelet[2737]: I0715 05:33:07.587868 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/462befc7-9998-48e9-9fe3-be8ad0e74203-socket-dir\") pod \"csi-node-driver-62x4x\" (UID: \"462befc7-9998-48e9-9fe3-be8ad0e74203\") " pod="calico-system/csi-node-driver-62x4x"
Jul 15 05:33:07.587983 kubelet[2737]: I0715 05:33:07.587950 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/462befc7-9998-48e9-9fe3-be8ad0e74203-registration-dir\") pod \"csi-node-driver-62x4x\" (UID: \"462befc7-9998-48e9-9fe3-be8ad0e74203\") " pod="calico-system/csi-node-driver-62x4x"
Jul 15 05:33:07.587983 kubelet[2737]: I0715 05:33:07.587963 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9r2r\" (UniqueName: \"kubernetes.io/projected/462befc7-9998-48e9-9fe3-be8ad0e74203-kube-api-access-c9r2r\") pod \"csi-node-driver-62x4x\" (UID: \"462befc7-9998-48e9-9fe3-be8ad0e74203\") " pod="calico-system/csi-node-driver-62x4x"
Jul 15 05:33:07.588025 kubelet[2737]: I0715 05:33:07.587987 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/462befc7-9998-48e9-9fe3-be8ad0e74203-varrun\") pod \"csi-node-driver-62x4x\" (UID: \"462befc7-9998-48e9-9fe3-be8ad0e74203\") " pod="calico-system/csi-node-driver-62x4x"
Jul 15 05:33:07.590125 kubelet[2737]: E0715 05:33:07.590100 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:33:07.590125 kubelet[2737]: W0715 05:33:07.590119 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:33:07.590203 kubelet[2737]: E0715 05:33:07.590132 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:33:07.591008 kubelet[2737]: E0715 05:33:07.590984 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:33:07.591008 kubelet[2737]: W0715 05:33:07.591002 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:33:07.591008 kubelet[2737]: E0715 05:33:07.591010 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:33:07.591302 kubelet[2737]: E0715 05:33:07.591281 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:33:07.591302 kubelet[2737]: W0715 05:33:07.591295 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:33:07.591302 kubelet[2737]: E0715 05:33:07.591303 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:33:07.591894 kubelet[2737]: E0715 05:33:07.591872 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:33:07.591894 kubelet[2737]: W0715 05:33:07.591887 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:33:07.591894 kubelet[2737]: E0715 05:33:07.591894 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:33:07.592190 kubelet[2737]: E0715 05:33:07.592163 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:33:07.592190 kubelet[2737]: W0715 05:33:07.592177 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:33:07.592190 kubelet[2737]: E0715 05:33:07.592184 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:33:07.592593 kubelet[2737]: E0715 05:33:07.592554 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:33:07.592593 kubelet[2737]: W0715 05:33:07.592586 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:33:07.592649 kubelet[2737]: E0715 05:33:07.592626 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:33:07.592912 kubelet[2737]: E0715 05:33:07.592881 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:33:07.592912 kubelet[2737]: W0715 05:33:07.592894 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:33:07.592912 kubelet[2737]: E0715 05:33:07.592901 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:33:07.593187 kubelet[2737]: E0715 05:33:07.593161 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:33:07.593187 kubelet[2737]: W0715 05:33:07.593174 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:33:07.593187 kubelet[2737]: E0715 05:33:07.593181 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:33:07.593436 kubelet[2737]: E0715 05:33:07.593417 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:33:07.593436 kubelet[2737]: W0715 05:33:07.593431 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:33:07.593502 kubelet[2737]: E0715 05:33:07.593437 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:33:07.593700 kubelet[2737]: E0715 05:33:07.593668 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:33:07.593730 kubelet[2737]: W0715 05:33:07.593683 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:33:07.593730 kubelet[2737]: E0715 05:33:07.593716 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:33:07.593948 kubelet[2737]: E0715 05:33:07.593903 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:33:07.593948 kubelet[2737]: W0715 05:33:07.593917 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:33:07.593948 kubelet[2737]: E0715 05:33:07.593923 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:33:07.594155 kubelet[2737]: E0715 05:33:07.594130 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:33:07.594155 kubelet[2737]: W0715 05:33:07.594143 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:33:07.594155 kubelet[2737]: E0715 05:33:07.594149 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:33:07.594399 kubelet[2737]: E0715 05:33:07.594365 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:33:07.594399 kubelet[2737]: W0715 05:33:07.594376 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:33:07.594399 kubelet[2737]: E0715 05:33:07.594383 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 05:33:07.594675 kubelet[2737]: E0715 05:33:07.594654 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 05:33:07.594716 kubelet[2737]: W0715 05:33:07.594692 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 05:33:07.594716 kubelet[2737]: E0715 05:33:07.594700 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 15 05:33:07.594920 kubelet[2737]: E0715 05:33:07.594889 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.594957 kubelet[2737]: W0715 05:33:07.594942 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.594957 kubelet[2737]: E0715 05:33:07.594950 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:33:07.595233 kubelet[2737]: E0715 05:33:07.595213 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.595233 kubelet[2737]: W0715 05:33:07.595226 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.595233 kubelet[2737]: E0715 05:33:07.595232 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:33:07.595613 kubelet[2737]: E0715 05:33:07.595590 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.595613 kubelet[2737]: W0715 05:33:07.595602 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.595613 kubelet[2737]: E0715 05:33:07.595610 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:33:07.595897 kubelet[2737]: E0715 05:33:07.595882 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.595897 kubelet[2737]: W0715 05:33:07.595894 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.595897 kubelet[2737]: E0715 05:33:07.595900 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:33:07.596518 kubelet[2737]: E0715 05:33:07.596495 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.596518 kubelet[2737]: W0715 05:33:07.596510 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.596518 kubelet[2737]: E0715 05:33:07.596517 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:33:07.596664 kubelet[2737]: E0715 05:33:07.596642 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.596690 kubelet[2737]: W0715 05:33:07.596655 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.596690 kubelet[2737]: E0715 05:33:07.596686 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:33:07.596910 kubelet[2737]: E0715 05:33:07.596878 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.596910 kubelet[2737]: W0715 05:33:07.596891 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.596910 kubelet[2737]: E0715 05:33:07.596897 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:33:07.597198 kubelet[2737]: E0715 05:33:07.597178 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.597198 kubelet[2737]: W0715 05:33:07.597191 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.597198 kubelet[2737]: E0715 05:33:07.597198 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:33:07.597530 kubelet[2737]: E0715 05:33:07.597472 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.597530 kubelet[2737]: W0715 05:33:07.597483 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.597530 kubelet[2737]: E0715 05:33:07.597489 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:33:07.597892 kubelet[2737]: E0715 05:33:07.597830 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.597892 kubelet[2737]: W0715 05:33:07.597841 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.597892 kubelet[2737]: E0715 05:33:07.597848 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:33:07.598223 kubelet[2737]: E0715 05:33:07.598199 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.598265 kubelet[2737]: W0715 05:33:07.598235 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.598265 kubelet[2737]: E0715 05:33:07.598243 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:33:07.598608 kubelet[2737]: E0715 05:33:07.598555 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.598608 kubelet[2737]: W0715 05:33:07.598566 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.598608 kubelet[2737]: E0715 05:33:07.598607 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:33:07.599019 kubelet[2737]: E0715 05:33:07.598995 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.599019 kubelet[2737]: W0715 05:33:07.599010 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.599019 kubelet[2737]: E0715 05:33:07.599017 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:33:07.599497 kubelet[2737]: E0715 05:33:07.599477 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.599497 kubelet[2737]: W0715 05:33:07.599491 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.599497 kubelet[2737]: E0715 05:33:07.599499 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:33:07.599948 kubelet[2737]: E0715 05:33:07.599658 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.599948 kubelet[2737]: W0715 05:33:07.599664 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.599948 kubelet[2737]: E0715 05:33:07.599670 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:33:07.600005 kubelet[2737]: E0715 05:33:07.599961 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.600005 kubelet[2737]: W0715 05:33:07.599968 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.600005 kubelet[2737]: E0715 05:33:07.599975 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:33:07.600281 kubelet[2737]: E0715 05:33:07.600261 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.600281 kubelet[2737]: W0715 05:33:07.600275 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.600281 kubelet[2737]: E0715 05:33:07.600282 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:33:07.601174 kubelet[2737]: E0715 05:33:07.601153 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.601174 kubelet[2737]: W0715 05:33:07.601169 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.601174 kubelet[2737]: E0715 05:33:07.601176 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:33:07.601341 kubelet[2737]: E0715 05:33:07.601321 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.601341 kubelet[2737]: W0715 05:33:07.601335 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.601341 kubelet[2737]: E0715 05:33:07.601342 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:33:07.601540 kubelet[2737]: E0715 05:33:07.601522 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.601540 kubelet[2737]: W0715 05:33:07.601535 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.601540 kubelet[2737]: E0715 05:33:07.601541 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:33:07.602246 kubelet[2737]: E0715 05:33:07.602225 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.602246 kubelet[2737]: W0715 05:33:07.602240 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.602246 kubelet[2737]: E0715 05:33:07.602247 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:33:07.602529 kubelet[2737]: E0715 05:33:07.602509 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.602529 kubelet[2737]: W0715 05:33:07.602525 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.602575 kubelet[2737]: E0715 05:33:07.602532 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:33:07.602948 kubelet[2737]: E0715 05:33:07.602928 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.602948 kubelet[2737]: W0715 05:33:07.602942 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.602948 kubelet[2737]: E0715 05:33:07.602949 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:33:07.604146 kubelet[2737]: E0715 05:33:07.604125 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.604146 kubelet[2737]: W0715 05:33:07.604140 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.604146 kubelet[2737]: E0715 05:33:07.604147 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:33:07.604393 kubelet[2737]: E0715 05:33:07.604375 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.604393 kubelet[2737]: W0715 05:33:07.604389 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.604393 kubelet[2737]: E0715 05:33:07.604395 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:33:07.604917 kubelet[2737]: E0715 05:33:07.604897 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.604917 kubelet[2737]: W0715 05:33:07.604911 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.604917 kubelet[2737]: E0715 05:33:07.604918 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:33:07.605323 kubelet[2737]: E0715 05:33:07.605305 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.605323 kubelet[2737]: W0715 05:33:07.605319 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.605367 kubelet[2737]: E0715 05:33:07.605326 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:33:07.605590 kubelet[2737]: E0715 05:33:07.605538 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.605590 kubelet[2737]: W0715 05:33:07.605548 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.605590 kubelet[2737]: E0715 05:33:07.605556 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:33:07.613413 kubelet[2737]: E0715 05:33:07.613327 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.613487 kubelet[2737]: W0715 05:33:07.613469 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.614374 kubelet[2737]: E0715 05:33:07.613601 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:33:07.618883 kubelet[2737]: E0715 05:33:07.618679 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.618936 kubelet[2737]: W0715 05:33:07.618926 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.618991 kubelet[2737]: E0715 05:33:07.618981 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:33:07.638137 containerd[1551]: time="2025-07-15T05:33:07.638111954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7df7896c98-gpdln,Uid:63403450-d718-4338-96f1-17926e14386d,Namespace:calico-system,Attempt:0,} returns sandbox id \"cafe80b53077deb4e2f4ab38d7ddbd9dd3c86454af5eff5023383336823a8d4c\"" Jul 15 05:33:07.638754 kubelet[2737]: E0715 05:33:07.638740 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:33:07.641069 containerd[1551]: time="2025-07-15T05:33:07.641054012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 15 05:33:07.688535 kubelet[2737]: E0715 05:33:07.688423 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.688535 kubelet[2737]: W0715 05:33:07.688444 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.688535 kubelet[2737]: E0715 05:33:07.688464 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:33:07.689615 kubelet[2737]: E0715 05:33:07.689596 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.689693 kubelet[2737]: W0715 05:33:07.689610 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.689693 kubelet[2737]: E0715 05:33:07.689673 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:33:07.690163 kubelet[2737]: E0715 05:33:07.690141 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.690296 kubelet[2737]: W0715 05:33:07.690165 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.690296 kubelet[2737]: E0715 05:33:07.690173 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:33:07.690735 kubelet[2737]: E0715 05:33:07.690576 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.690735 kubelet[2737]: W0715 05:33:07.690587 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.690735 kubelet[2737]: E0715 05:33:07.690593 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:33:07.691145 kubelet[2737]: E0715 05:33:07.691104 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.691145 kubelet[2737]: W0715 05:33:07.691115 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.691145 kubelet[2737]: E0715 05:33:07.691121 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:33:07.691702 kubelet[2737]: E0715 05:33:07.691667 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.691828 kubelet[2737]: W0715 05:33:07.691678 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.691828 kubelet[2737]: E0715 05:33:07.691780 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:33:07.692556 kubelet[2737]: E0715 05:33:07.692540 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.692556 kubelet[2737]: W0715 05:33:07.692551 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.692708 kubelet[2737]: E0715 05:33:07.692559 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:33:07.693840 kubelet[2737]: E0715 05:33:07.693825 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.693840 kubelet[2737]: W0715 05:33:07.693836 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.693918 kubelet[2737]: E0715 05:33:07.693846 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:33:07.694048 kubelet[2737]: E0715 05:33:07.694020 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.694048 kubelet[2737]: W0715 05:33:07.694030 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.694048 kubelet[2737]: E0715 05:33:07.694036 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:33:07.694305 kubelet[2737]: E0715 05:33:07.694289 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.694305 kubelet[2737]: W0715 05:33:07.694301 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.694378 kubelet[2737]: E0715 05:33:07.694310 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:33:07.694696 kubelet[2737]: E0715 05:33:07.694651 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.694696 kubelet[2737]: W0715 05:33:07.694666 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.694696 kubelet[2737]: E0715 05:33:07.694673 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:33:07.694865 kubelet[2737]: E0715 05:33:07.694836 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.694865 kubelet[2737]: W0715 05:33:07.694842 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.694865 kubelet[2737]: E0715 05:33:07.694849 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:33:07.695219 kubelet[2737]: E0715 05:33:07.695135 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.695219 kubelet[2737]: W0715 05:33:07.695146 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.695219 kubelet[2737]: E0715 05:33:07.695153 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:33:07.695729 kubelet[2737]: E0715 05:33:07.695322 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.695729 kubelet[2737]: W0715 05:33:07.695332 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.695729 kubelet[2737]: E0715 05:33:07.695338 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:33:07.695729 kubelet[2737]: E0715 05:33:07.695489 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.695729 kubelet[2737]: W0715 05:33:07.695495 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.695729 kubelet[2737]: E0715 05:33:07.695501 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:33:07.695729 kubelet[2737]: E0715 05:33:07.695654 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.695729 kubelet[2737]: W0715 05:33:07.695659 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.695729 kubelet[2737]: E0715 05:33:07.695665 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:33:07.695894 kubelet[2737]: E0715 05:33:07.695812 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.695894 kubelet[2737]: W0715 05:33:07.695819 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.695894 kubelet[2737]: E0715 05:33:07.695824 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:33:07.695989 kubelet[2737]: E0715 05:33:07.695975 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.695989 kubelet[2737]: W0715 05:33:07.695984 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.695989 kubelet[2737]: E0715 05:33:07.695990 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:33:07.696269 kubelet[2737]: E0715 05:33:07.696254 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.696269 kubelet[2737]: W0715 05:33:07.696264 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.696269 kubelet[2737]: E0715 05:33:07.696272 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:33:07.696820 kubelet[2737]: E0715 05:33:07.696801 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.696820 kubelet[2737]: W0715 05:33:07.696815 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.696820 kubelet[2737]: E0715 05:33:07.696823 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:33:07.697005 kubelet[2737]: E0715 05:33:07.696983 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.697005 kubelet[2737]: W0715 05:33:07.696997 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.697005 kubelet[2737]: E0715 05:33:07.697004 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:33:07.697369 kubelet[2737]: E0715 05:33:07.697191 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.697369 kubelet[2737]: W0715 05:33:07.697197 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.697369 kubelet[2737]: E0715 05:33:07.697203 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:33:07.697369 kubelet[2737]: E0715 05:33:07.697368 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.697369 kubelet[2737]: W0715 05:33:07.697374 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.697489 kubelet[2737]: E0715 05:33:07.697381 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:33:07.697685 kubelet[2737]: E0715 05:33:07.697622 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.697685 kubelet[2737]: W0715 05:33:07.697633 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.697685 kubelet[2737]: E0715 05:33:07.697639 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:33:07.697836 kubelet[2737]: E0715 05:33:07.697814 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.697836 kubelet[2737]: W0715 05:33:07.697827 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.697836 kubelet[2737]: E0715 05:33:07.697833 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:33:07.704832 kubelet[2737]: E0715 05:33:07.704817 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:33:07.704832 kubelet[2737]: W0715 05:33:07.704828 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:33:07.704931 kubelet[2737]: E0715 05:33:07.704837 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:33:07.712356 containerd[1551]: time="2025-07-15T05:33:07.712305613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xwgmt,Uid:ae09ca6f-7c6e-47b0-a764-23bb61e448dc,Namespace:calico-system,Attempt:0,}" Jul 15 05:33:07.726548 containerd[1551]: time="2025-07-15T05:33:07.726513505Z" level=info msg="connecting to shim 7d01a1a6a0afc9f426170d578683ffcf4a01bb2f75d2116776e314736756b5d2" address="unix:///run/containerd/s/a8924df1e9cc3946ce8bce0b70eff0fcb837c991e94e6f252ec8537f0b19ebb2" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:33:07.748235 systemd[1]: Started cri-containerd-7d01a1a6a0afc9f426170d578683ffcf4a01bb2f75d2116776e314736756b5d2.scope - libcontainer container 7d01a1a6a0afc9f426170d578683ffcf4a01bb2f75d2116776e314736756b5d2. 
Jul 15 05:33:07.811383 containerd[1551]: time="2025-07-15T05:33:07.811338090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xwgmt,Uid:ae09ca6f-7c6e-47b0-a764-23bb61e448dc,Namespace:calico-system,Attempt:0,} returns sandbox id \"7d01a1a6a0afc9f426170d578683ffcf4a01bb2f75d2116776e314736756b5d2\"" Jul 15 05:33:08.806828 kubelet[2737]: E0715 05:33:08.806777 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-62x4x" podUID="462befc7-9998-48e9-9fe3-be8ad0e74203" Jul 15 05:33:09.087522 containerd[1551]: time="2025-07-15T05:33:09.087327442Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:33:09.088107 containerd[1551]: time="2025-07-15T05:33:09.087785024Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 15 05:33:09.089170 containerd[1551]: time="2025-07-15T05:33:09.088203711Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:33:09.090151 containerd[1551]: time="2025-07-15T05:33:09.090125189Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:33:09.090835 containerd[1551]: time="2025-07-15T05:33:09.090796535Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 1.44954952s" Jul 15 05:33:09.090889 containerd[1551]: time="2025-07-15T05:33:09.090877364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 15 05:33:09.093275 containerd[1551]: time="2025-07-15T05:33:09.093241796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 15 05:33:09.108784 containerd[1551]: time="2025-07-15T05:33:09.108745242Z" level=info msg="CreateContainer within sandbox \"cafe80b53077deb4e2f4ab38d7ddbd9dd3c86454af5eff5023383336823a8d4c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 15 05:33:09.116890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount816801409.mount: Deactivated successfully. Jul 15 05:33:09.118140 containerd[1551]: time="2025-07-15T05:33:09.117998326Z" level=info msg="Container 44daf1d9d49a6be6cde1fb381b3d8b1a8f4c8b5ad6a35785c954be2d8dba62d1: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:33:09.122395 containerd[1551]: time="2025-07-15T05:33:09.122372744Z" level=info msg="CreateContainer within sandbox \"cafe80b53077deb4e2f4ab38d7ddbd9dd3c86454af5eff5023383336823a8d4c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"44daf1d9d49a6be6cde1fb381b3d8b1a8f4c8b5ad6a35785c954be2d8dba62d1\"" Jul 15 05:33:09.122950 containerd[1551]: time="2025-07-15T05:33:09.122916396Z" level=info msg="StartContainer for \"44daf1d9d49a6be6cde1fb381b3d8b1a8f4c8b5ad6a35785c954be2d8dba62d1\"" Jul 15 05:33:09.124015 containerd[1551]: time="2025-07-15T05:33:09.123943326Z" level=info msg="connecting to shim 44daf1d9d49a6be6cde1fb381b3d8b1a8f4c8b5ad6a35785c954be2d8dba62d1" address="unix:///run/containerd/s/5b61203dd675a64caa31869b9ac1c50ea48289c7550d69549c4fdcae872b5f2c" protocol=ttrpc version=3 Jul 15 05:33:09.143185 
systemd[1]: Started cri-containerd-44daf1d9d49a6be6cde1fb381b3d8b1a8f4c8b5ad6a35785c954be2d8dba62d1.scope - libcontainer container 44daf1d9d49a6be6cde1fb381b3d8b1a8f4c8b5ad6a35785c954be2d8dba62d1. Jul 15 05:33:09.182685 containerd[1551]: time="2025-07-15T05:33:09.182652586Z" level=info msg="StartContainer for \"44daf1d9d49a6be6cde1fb381b3d8b1a8f4c8b5ad6a35785c954be2d8dba62d1\" returns successfully" Jul 15 05:33:09.740792 containerd[1551]: time="2025-07-15T05:33:09.740336651Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:33:09.741518 containerd[1551]: time="2025-07-15T05:33:09.741503414Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 15 05:33:09.742297 containerd[1551]: time="2025-07-15T05:33:09.742281006Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:33:09.743920 containerd[1551]: time="2025-07-15T05:33:09.743552186Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:33:09.744100 containerd[1551]: time="2025-07-15T05:33:09.744050083Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 650.78186ms" Jul 15 05:33:09.744131 containerd[1551]: time="2025-07-15T05:33:09.744108646Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 15 05:33:09.748838 containerd[1551]: time="2025-07-15T05:33:09.748818162Z" level=info msg="CreateContainer within sandbox \"7d01a1a6a0afc9f426170d578683ffcf4a01bb2f75d2116776e314736756b5d2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 15 05:33:09.760918 containerd[1551]: time="2025-07-15T05:33:09.759480598Z" level=info msg="Container 7dff791794bef437f0092dec08bea68cf97b952d3e6c31ac5de00d604a670c88: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:33:09.766710 containerd[1551]: time="2025-07-15T05:33:09.766654424Z" level=info msg="CreateContainer within sandbox \"7d01a1a6a0afc9f426170d578683ffcf4a01bb2f75d2116776e314736756b5d2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7dff791794bef437f0092dec08bea68cf97b952d3e6c31ac5de00d604a670c88\"" Jul 15 05:33:09.767948 containerd[1551]: time="2025-07-15T05:33:09.767737637Z" level=info msg="StartContainer for \"7dff791794bef437f0092dec08bea68cf97b952d3e6c31ac5de00d604a670c88\"" Jul 15 05:33:09.770025 containerd[1551]: time="2025-07-15T05:33:09.769993253Z" level=info msg="connecting to shim 7dff791794bef437f0092dec08bea68cf97b952d3e6c31ac5de00d604a670c88" address="unix:///run/containerd/s/a8924df1e9cc3946ce8bce0b70eff0fcb837c991e94e6f252ec8537f0b19ebb2" protocol=ttrpc version=3 Jul 15 05:33:09.791356 systemd[1]: Started cri-containerd-7dff791794bef437f0092dec08bea68cf97b952d3e6c31ac5de00d604a670c88.scope - libcontainer container 7dff791794bef437f0092dec08bea68cf97b952d3e6c31ac5de00d604a670c88. 
Jul 15 05:33:09.827873 containerd[1551]: time="2025-07-15T05:33:09.827800407Z" level=info msg="StartContainer for \"7dff791794bef437f0092dec08bea68cf97b952d3e6c31ac5de00d604a670c88\" returns successfully" Jul 15 05:33:09.844117 systemd[1]: cri-containerd-7dff791794bef437f0092dec08bea68cf97b952d3e6c31ac5de00d604a670c88.scope: Deactivated successfully. Jul 15 05:33:09.846418 containerd[1551]: time="2025-07-15T05:33:09.846385414Z" level=info msg="received exit event container_id:\"7dff791794bef437f0092dec08bea68cf97b952d3e6c31ac5de00d604a670c88\" id:\"7dff791794bef437f0092dec08bea68cf97b952d3e6c31ac5de00d604a670c88\" pid:3456 exited_at:{seconds:1752557589 nanos:846150823}" Jul 15 05:33:09.846830 containerd[1551]: time="2025-07-15T05:33:09.846791763Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7dff791794bef437f0092dec08bea68cf97b952d3e6c31ac5de00d604a670c88\" id:\"7dff791794bef437f0092dec08bea68cf97b952d3e6c31ac5de00d604a670c88\" pid:3456 exited_at:{seconds:1752557589 nanos:846150823}" Jul 15 05:33:09.869161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7dff791794bef437f0092dec08bea68cf97b952d3e6c31ac5de00d604a670c88-rootfs.mount: Deactivated successfully. 
Jul 15 05:33:09.893174 kubelet[2737]: E0715 05:33:09.893098 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:33:10.805993 kubelet[2737]: E0715 05:33:10.805932 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-62x4x" podUID="462befc7-9998-48e9-9fe3-be8ad0e74203" Jul 15 05:33:10.895627 kubelet[2737]: I0715 05:33:10.895593 2737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:33:10.895953 kubelet[2737]: E0715 05:33:10.895828 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:33:10.899706 containerd[1551]: time="2025-07-15T05:33:10.899654031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 15 05:33:10.914195 kubelet[2737]: I0715 05:33:10.914157 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7df7896c98-gpdln" podStartSLOduration=2.46260981 podStartE2EDuration="3.914147209s" podCreationTimestamp="2025-07-15 05:33:07 +0000 UTC" firstStartedPulling="2025-07-15 05:33:07.640656379 +0000 UTC m=+17.966597638" lastFinishedPulling="2025-07-15 05:33:09.092193778 +0000 UTC m=+19.418135037" observedRunningTime="2025-07-15 05:33:09.945735851 +0000 UTC m=+20.271677120" watchObservedRunningTime="2025-07-15 05:33:10.914147209 +0000 UTC m=+21.240088478" Jul 15 05:33:12.465665 containerd[1551]: time="2025-07-15T05:33:12.465619956Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jul 15 05:33:12.466531 containerd[1551]: time="2025-07-15T05:33:12.466330363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 15 05:33:12.467033 containerd[1551]: time="2025-07-15T05:33:12.467002693Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:33:12.468528 containerd[1551]: time="2025-07-15T05:33:12.468497008Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:33:12.469198 containerd[1551]: time="2025-07-15T05:33:12.469173007Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 1.569477631s" Jul 15 05:33:12.469264 containerd[1551]: time="2025-07-15T05:33:12.469250929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 15 05:33:12.472401 containerd[1551]: time="2025-07-15T05:33:12.472369425Z" level=info msg="CreateContainer within sandbox \"7d01a1a6a0afc9f426170d578683ffcf4a01bb2f75d2116776e314736756b5d2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 15 05:33:12.479918 containerd[1551]: time="2025-07-15T05:33:12.479209315Z" level=info msg="Container 7a2d420ac514404024d0415cc4f21ee68f23d31c30b4f88d11670ab8ef2d04be: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:33:12.495521 containerd[1551]: time="2025-07-15T05:33:12.495491754Z" level=info 
msg="CreateContainer within sandbox \"7d01a1a6a0afc9f426170d578683ffcf4a01bb2f75d2116776e314736756b5d2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7a2d420ac514404024d0415cc4f21ee68f23d31c30b4f88d11670ab8ef2d04be\"" Jul 15 05:33:12.496090 containerd[1551]: time="2025-07-15T05:33:12.496048156Z" level=info msg="StartContainer for \"7a2d420ac514404024d0415cc4f21ee68f23d31c30b4f88d11670ab8ef2d04be\"" Jul 15 05:33:12.498863 containerd[1551]: time="2025-07-15T05:33:12.498709640Z" level=info msg="connecting to shim 7a2d420ac514404024d0415cc4f21ee68f23d31c30b4f88d11670ab8ef2d04be" address="unix:///run/containerd/s/a8924df1e9cc3946ce8bce0b70eff0fcb837c991e94e6f252ec8537f0b19ebb2" protocol=ttrpc version=3 Jul 15 05:33:12.528183 systemd[1]: Started cri-containerd-7a2d420ac514404024d0415cc4f21ee68f23d31c30b4f88d11670ab8ef2d04be.scope - libcontainer container 7a2d420ac514404024d0415cc4f21ee68f23d31c30b4f88d11670ab8ef2d04be. Jul 15 05:33:12.567089 containerd[1551]: time="2025-07-15T05:33:12.567019365Z" level=info msg="StartContainer for \"7a2d420ac514404024d0415cc4f21ee68f23d31c30b4f88d11670ab8ef2d04be\" returns successfully" Jul 15 05:33:12.806423 kubelet[2737]: E0715 05:33:12.805861 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-62x4x" podUID="462befc7-9998-48e9-9fe3-be8ad0e74203" Jul 15 05:33:12.982057 containerd[1551]: time="2025-07-15T05:33:12.982003236Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 05:33:12.985344 systemd[1]: 
cri-containerd-7a2d420ac514404024d0415cc4f21ee68f23d31c30b4f88d11670ab8ef2d04be.scope: Deactivated successfully. Jul 15 05:33:12.986122 systemd[1]: cri-containerd-7a2d420ac514404024d0415cc4f21ee68f23d31c30b4f88d11670ab8ef2d04be.scope: Consumed 441ms CPU time, 204M memory peak, 171.2M written to disk. Jul 15 05:33:12.987042 containerd[1551]: time="2025-07-15T05:33:12.987018415Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7a2d420ac514404024d0415cc4f21ee68f23d31c30b4f88d11670ab8ef2d04be\" id:\"7a2d420ac514404024d0415cc4f21ee68f23d31c30b4f88d11670ab8ef2d04be\" pid:3515 exited_at:{seconds:1752557592 nanos:986682090}" Jul 15 05:33:12.987222 containerd[1551]: time="2025-07-15T05:33:12.987199226Z" level=info msg="received exit event container_id:\"7a2d420ac514404024d0415cc4f21ee68f23d31c30b4f88d11670ab8ef2d04be\" id:\"7a2d420ac514404024d0415cc4f21ee68f23d31c30b4f88d11670ab8ef2d04be\" pid:3515 exited_at:{seconds:1752557592 nanos:986682090}" Jul 15 05:33:13.005639 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a2d420ac514404024d0415cc4f21ee68f23d31c30b4f88d11670ab8ef2d04be-rootfs.mount: Deactivated successfully. Jul 15 05:33:13.035962 kubelet[2737]: I0715 05:33:13.034616 2737 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 15 05:33:13.079723 systemd[1]: Created slice kubepods-burstable-pod110a4b1b_441e_4629_b994_a1d817c572c6.slice - libcontainer container kubepods-burstable-pod110a4b1b_441e_4629_b994_a1d817c572c6.slice. Jul 15 05:33:13.089063 systemd[1]: Created slice kubepods-burstable-pod71c71ad4_1367_46ee_b402_3c8bcaa7064a.slice - libcontainer container kubepods-burstable-pod71c71ad4_1367_46ee_b402_3c8bcaa7064a.slice. Jul 15 05:33:13.109258 systemd[1]: Created slice kubepods-besteffort-pod6765a94e_ca61_470d_b923_4780beee4dfa.slice - libcontainer container kubepods-besteffort-pod6765a94e_ca61_470d_b923_4780beee4dfa.slice. 
Jul 15 05:33:13.119547 systemd[1]: Created slice kubepods-besteffort-podefe7ad1c_d3cd_4848_b56b_05835e9bdae6.slice - libcontainer container kubepods-besteffort-podefe7ad1c_d3cd_4848_b56b_05835e9bdae6.slice. Jul 15 05:33:13.132151 systemd[1]: Created slice kubepods-besteffort-podbd15ad43_a0fe_48b6_a4a4_a4c5f213d373.slice - libcontainer container kubepods-besteffort-podbd15ad43_a0fe_48b6_a4a4_a4c5f213d373.slice. Jul 15 05:33:13.135130 kubelet[2737]: I0715 05:33:13.135059 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btklm\" (UniqueName: \"kubernetes.io/projected/31c4da21-35cd-49df-bc2f-abe8b087a31b-kube-api-access-btklm\") pod \"calico-apiserver-57b4fb66-qrqgk\" (UID: \"31c4da21-35cd-49df-bc2f-abe8b087a31b\") " pod="calico-apiserver/calico-apiserver-57b4fb66-qrqgk" Jul 15 05:33:13.135192 kubelet[2737]: I0715 05:33:13.135130 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw4s6\" (UniqueName: \"kubernetes.io/projected/6765a94e-ca61-470d-b923-4780beee4dfa-kube-api-access-cw4s6\") pod \"calico-kube-controllers-85ccff6498-v45rk\" (UID: \"6765a94e-ca61-470d-b923-4780beee4dfa\") " pod="calico-system/calico-kube-controllers-85ccff6498-v45rk" Jul 15 05:33:13.135192 kubelet[2737]: I0715 05:33:13.135151 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efe7ad1c-d3cd-4848-b56b-05835e9bdae6-whisker-ca-bundle\") pod \"whisker-57fb9cb5f4-9fnr2\" (UID: \"efe7ad1c-d3cd-4848-b56b-05835e9bdae6\") " pod="calico-system/whisker-57fb9cb5f4-9fnr2" Jul 15 05:33:13.135192 kubelet[2737]: I0715 05:33:13.135164 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qpbl\" (UniqueName: \"kubernetes.io/projected/efe7ad1c-d3cd-4848-b56b-05835e9bdae6-kube-api-access-9qpbl\") pod 
\"whisker-57fb9cb5f4-9fnr2\" (UID: \"efe7ad1c-d3cd-4848-b56b-05835e9bdae6\") " pod="calico-system/whisker-57fb9cb5f4-9fnr2" Jul 15 05:33:13.135192 kubelet[2737]: I0715 05:33:13.135181 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlc8r\" (UniqueName: \"kubernetes.io/projected/744412fe-caf0-40c9-a293-a02870e7919d-kube-api-access-hlc8r\") pod \"calico-apiserver-57b4fb66-zlsp9\" (UID: \"744412fe-caf0-40c9-a293-a02870e7919d\") " pod="calico-apiserver/calico-apiserver-57b4fb66-zlsp9" Jul 15 05:33:13.135391 kubelet[2737]: I0715 05:33:13.135195 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/bd15ad43-a0fe-48b6-a4a4-a4c5f213d373-goldmane-key-pair\") pod \"goldmane-768f4c5c69-vpk4j\" (UID: \"bd15ad43-a0fe-48b6-a4a4-a4c5f213d373\") " pod="calico-system/goldmane-768f4c5c69-vpk4j" Jul 15 05:33:13.135391 kubelet[2737]: I0715 05:33:13.135206 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/efe7ad1c-d3cd-4848-b56b-05835e9bdae6-whisker-backend-key-pair\") pod \"whisker-57fb9cb5f4-9fnr2\" (UID: \"efe7ad1c-d3cd-4848-b56b-05835e9bdae6\") " pod="calico-system/whisker-57fb9cb5f4-9fnr2" Jul 15 05:33:13.135391 kubelet[2737]: I0715 05:33:13.135221 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/110a4b1b-441e-4629-b994-a1d817c572c6-config-volume\") pod \"coredns-674b8bbfcf-w4rqj\" (UID: \"110a4b1b-441e-4629-b994-a1d817c572c6\") " pod="kube-system/coredns-674b8bbfcf-w4rqj" Jul 15 05:33:13.135391 kubelet[2737]: I0715 05:33:13.135237 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/bd15ad43-a0fe-48b6-a4a4-a4c5f213d373-config\") pod \"goldmane-768f4c5c69-vpk4j\" (UID: \"bd15ad43-a0fe-48b6-a4a4-a4c5f213d373\") " pod="calico-system/goldmane-768f4c5c69-vpk4j" Jul 15 05:33:13.135391 kubelet[2737]: I0715 05:33:13.135250 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz8d9\" (UniqueName: \"kubernetes.io/projected/71c71ad4-1367-46ee-b402-3c8bcaa7064a-kube-api-access-nz8d9\") pod \"coredns-674b8bbfcf-cpztt\" (UID: \"71c71ad4-1367-46ee-b402-3c8bcaa7064a\") " pod="kube-system/coredns-674b8bbfcf-cpztt" Jul 15 05:33:13.135483 kubelet[2737]: I0715 05:33:13.135263 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd15ad43-a0fe-48b6-a4a4-a4c5f213d373-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-vpk4j\" (UID: \"bd15ad43-a0fe-48b6-a4a4-a4c5f213d373\") " pod="calico-system/goldmane-768f4c5c69-vpk4j" Jul 15 05:33:13.135483 kubelet[2737]: I0715 05:33:13.135278 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/744412fe-caf0-40c9-a293-a02870e7919d-calico-apiserver-certs\") pod \"calico-apiserver-57b4fb66-zlsp9\" (UID: \"744412fe-caf0-40c9-a293-a02870e7919d\") " pod="calico-apiserver/calico-apiserver-57b4fb66-zlsp9" Jul 15 05:33:13.135483 kubelet[2737]: I0715 05:33:13.135293 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk4bm\" (UniqueName: \"kubernetes.io/projected/bd15ad43-a0fe-48b6-a4a4-a4c5f213d373-kube-api-access-zk4bm\") pod \"goldmane-768f4c5c69-vpk4j\" (UID: \"bd15ad43-a0fe-48b6-a4a4-a4c5f213d373\") " pod="calico-system/goldmane-768f4c5c69-vpk4j" Jul 15 05:33:13.135483 kubelet[2737]: I0715 05:33:13.135311 2737 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czwzc\" (UniqueName: \"kubernetes.io/projected/110a4b1b-441e-4629-b994-a1d817c572c6-kube-api-access-czwzc\") pod \"coredns-674b8bbfcf-w4rqj\" (UID: \"110a4b1b-441e-4629-b994-a1d817c572c6\") " pod="kube-system/coredns-674b8bbfcf-w4rqj" Jul 15 05:33:13.135483 kubelet[2737]: I0715 05:33:13.135324 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/31c4da21-35cd-49df-bc2f-abe8b087a31b-calico-apiserver-certs\") pod \"calico-apiserver-57b4fb66-qrqgk\" (UID: \"31c4da21-35cd-49df-bc2f-abe8b087a31b\") " pod="calico-apiserver/calico-apiserver-57b4fb66-qrqgk" Jul 15 05:33:13.135574 kubelet[2737]: I0715 05:33:13.135337 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71c71ad4-1367-46ee-b402-3c8bcaa7064a-config-volume\") pod \"coredns-674b8bbfcf-cpztt\" (UID: \"71c71ad4-1367-46ee-b402-3c8bcaa7064a\") " pod="kube-system/coredns-674b8bbfcf-cpztt" Jul 15 05:33:13.135574 kubelet[2737]: I0715 05:33:13.135362 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6765a94e-ca61-470d-b923-4780beee4dfa-tigera-ca-bundle\") pod \"calico-kube-controllers-85ccff6498-v45rk\" (UID: \"6765a94e-ca61-470d-b923-4780beee4dfa\") " pod="calico-system/calico-kube-controllers-85ccff6498-v45rk" Jul 15 05:33:13.141792 systemd[1]: Created slice kubepods-besteffort-pod31c4da21_35cd_49df_bc2f_abe8b087a31b.slice - libcontainer container kubepods-besteffort-pod31c4da21_35cd_49df_bc2f_abe8b087a31b.slice. 
Jul 15 05:33:13.149255 systemd[1]: Created slice kubepods-besteffort-pod744412fe_caf0_40c9_a293_a02870e7919d.slice - libcontainer container kubepods-besteffort-pod744412fe_caf0_40c9_a293_a02870e7919d.slice. Jul 15 05:33:13.386973 kubelet[2737]: E0715 05:33:13.386934 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:33:13.388027 containerd[1551]: time="2025-07-15T05:33:13.387997972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w4rqj,Uid:110a4b1b-441e-4629-b994-a1d817c572c6,Namespace:kube-system,Attempt:0,}" Jul 15 05:33:13.395990 kubelet[2737]: E0715 05:33:13.395972 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:33:13.400606 containerd[1551]: time="2025-07-15T05:33:13.400548610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cpztt,Uid:71c71ad4-1367-46ee-b402-3c8bcaa7064a,Namespace:kube-system,Attempt:0,}" Jul 15 05:33:13.415741 containerd[1551]: time="2025-07-15T05:33:13.415714723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85ccff6498-v45rk,Uid:6765a94e-ca61-470d-b923-4780beee4dfa,Namespace:calico-system,Attempt:0,}" Jul 15 05:33:13.435551 containerd[1551]: time="2025-07-15T05:33:13.435361550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57fb9cb5f4-9fnr2,Uid:efe7ad1c-d3cd-4848-b56b-05835e9bdae6,Namespace:calico-system,Attempt:0,}" Jul 15 05:33:13.443443 containerd[1551]: time="2025-07-15T05:33:13.443427385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-vpk4j,Uid:bd15ad43-a0fe-48b6-a4a4-a4c5f213d373,Namespace:calico-system,Attempt:0,}" Jul 15 05:33:13.446652 containerd[1551]: 
time="2025-07-15T05:33:13.446631233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b4fb66-qrqgk,Uid:31c4da21-35cd-49df-bc2f-abe8b087a31b,Namespace:calico-apiserver,Attempt:0,}" Jul 15 05:33:13.453199 containerd[1551]: time="2025-07-15T05:33:13.453170716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b4fb66-zlsp9,Uid:744412fe-caf0-40c9-a293-a02870e7919d,Namespace:calico-apiserver,Attempt:0,}" Jul 15 05:33:13.530023 containerd[1551]: time="2025-07-15T05:33:13.529984507Z" level=error msg="Failed to destroy network for sandbox \"2ad6ca6fc257076bf442fadeb9c182416a27c59f91c56961f620baaf48787cce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:33:13.533756 systemd[1]: run-netns-cni\x2d92db0ad8\x2d6c75\x2d61c7\x2d5d89\x2d5106a5b083dd.mount: Deactivated successfully. Jul 15 05:33:13.534879 containerd[1551]: time="2025-07-15T05:33:13.534752853Z" level=error msg="Failed to destroy network for sandbox \"8412946cbfbc176020bbf29b641afcfa7e4661847cd37c275209e49635231f68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:33:13.538346 systemd[1]: run-netns-cni\x2dd19487b7\x2df4dd\x2dfe63\x2d0fef\x2dbe8b2dda427f.mount: Deactivated successfully. 
Jul 15 05:33:13.540477 containerd[1551]: time="2025-07-15T05:33:13.540402933Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w4rqj,Uid:110a4b1b-441e-4629-b994-a1d817c572c6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ad6ca6fc257076bf442fadeb9c182416a27c59f91c56961f620baaf48787cce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:33:13.545737 containerd[1551]: time="2025-07-15T05:33:13.545386457Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cpztt,Uid:71c71ad4-1367-46ee-b402-3c8bcaa7064a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8412946cbfbc176020bbf29b641afcfa7e4661847cd37c275209e49635231f68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:33:13.545844 kubelet[2737]: E0715 05:33:13.545700 2737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ad6ca6fc257076bf442fadeb9c182416a27c59f91c56961f620baaf48787cce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:33:13.545981 kubelet[2737]: E0715 05:33:13.545943 2737 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ad6ca6fc257076bf442fadeb9c182416a27c59f91c56961f620baaf48787cce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-w4rqj" Jul 15 05:33:13.546190 kubelet[2737]: E0715 05:33:13.546068 2737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8412946cbfbc176020bbf29b641afcfa7e4661847cd37c275209e49635231f68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:33:13.546190 kubelet[2737]: E0715 05:33:13.546131 2737 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8412946cbfbc176020bbf29b641afcfa7e4661847cd37c275209e49635231f68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-cpztt" Jul 15 05:33:13.546190 kubelet[2737]: E0715 05:33:13.546145 2737 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8412946cbfbc176020bbf29b641afcfa7e4661847cd37c275209e49635231f68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-cpztt" Jul 15 05:33:13.546399 kubelet[2737]: E0715 05:33:13.546285 2737 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ad6ca6fc257076bf442fadeb9c182416a27c59f91c56961f620baaf48787cce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-w4rqj" Jul 15 05:33:13.546531 kubelet[2737]: E0715 05:33:13.546443 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-cpztt_kube-system(71c71ad4-1367-46ee-b402-3c8bcaa7064a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-cpztt_kube-system(71c71ad4-1367-46ee-b402-3c8bcaa7064a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8412946cbfbc176020bbf29b641afcfa7e4661847cd37c275209e49635231f68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-cpztt" podUID="71c71ad4-1367-46ee-b402-3c8bcaa7064a" Jul 15 05:33:13.546910 kubelet[2737]: E0715 05:33:13.546325 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-w4rqj_kube-system(110a4b1b-441e-4629-b994-a1d817c572c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-w4rqj_kube-system(110a4b1b-441e-4629-b994-a1d817c572c6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2ad6ca6fc257076bf442fadeb9c182416a27c59f91c56961f620baaf48787cce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-w4rqj" podUID="110a4b1b-441e-4629-b994-a1d817c572c6" Jul 15 05:33:13.557830 containerd[1551]: time="2025-07-15T05:33:13.557792529Z" level=error msg="Failed to destroy network for sandbox \"0fa976858e21a420c9c4f01aca02aeb62da5ff0c1f72493fd94118862154890c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Jul 15 05:33:13.559928 systemd[1]: run-netns-cni\x2dd6f84887\x2d47c6\x2da1e3\x2db610\x2d14094815e1ba.mount: Deactivated successfully. Jul 15 05:33:13.562186 containerd[1551]: time="2025-07-15T05:33:13.562148495Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85ccff6498-v45rk,Uid:6765a94e-ca61-470d-b923-4780beee4dfa,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fa976858e21a420c9c4f01aca02aeb62da5ff0c1f72493fd94118862154890c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:33:13.573101 kubelet[2737]: E0715 05:33:13.572884 2737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fa976858e21a420c9c4f01aca02aeb62da5ff0c1f72493fd94118862154890c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:33:13.573101 kubelet[2737]: E0715 05:33:13.573026 2737 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fa976858e21a420c9c4f01aca02aeb62da5ff0c1f72493fd94118862154890c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85ccff6498-v45rk" Jul 15 05:33:13.573101 kubelet[2737]: E0715 05:33:13.573040 2737 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fa976858e21a420c9c4f01aca02aeb62da5ff0c1f72493fd94118862154890c\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85ccff6498-v45rk" Jul 15 05:33:13.573293 kubelet[2737]: E0715 05:33:13.573215 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85ccff6498-v45rk_calico-system(6765a94e-ca61-470d-b923-4780beee4dfa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85ccff6498-v45rk_calico-system(6765a94e-ca61-470d-b923-4780beee4dfa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0fa976858e21a420c9c4f01aca02aeb62da5ff0c1f72493fd94118862154890c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85ccff6498-v45rk" podUID="6765a94e-ca61-470d-b923-4780beee4dfa" Jul 15 05:33:13.613244 containerd[1551]: time="2025-07-15T05:33:13.613211764Z" level=error msg="Failed to destroy network for sandbox \"b18fa0c097d18f561f0f2576454fdbfa01a1f4d523195386920a2e14d842d336\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:33:13.616751 containerd[1551]: time="2025-07-15T05:33:13.615826379Z" level=error msg="Failed to destroy network for sandbox \"12d00c87fe6b2f61831c1de2f19ab8db231bacfd3ec5c14e9df742a0cfa386e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:33:13.616185 systemd[1]: run-netns-cni\x2d4b950d29\x2da4a4\x2def4b\x2da703\x2da707e7572416.mount: Deactivated successfully. 
Jul 15 05:33:13.618500 containerd[1551]: time="2025-07-15T05:33:13.618455643Z" level=error msg="Failed to destroy network for sandbox \"f7d54626272e4106bff9278a151467db65b147de7f6c82f8ec372fa94a536106\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:33:13.618633 containerd[1551]: time="2025-07-15T05:33:13.618600209Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57fb9cb5f4-9fnr2,Uid:efe7ad1c-d3cd-4848-b56b-05835e9bdae6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b18fa0c097d18f561f0f2576454fdbfa01a1f4d523195386920a2e14d842d336\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:33:13.618951 kubelet[2737]: E0715 05:33:13.618897 2737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b18fa0c097d18f561f0f2576454fdbfa01a1f4d523195386920a2e14d842d336\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:33:13.618996 kubelet[2737]: E0715 05:33:13.618959 2737 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b18fa0c097d18f561f0f2576454fdbfa01a1f4d523195386920a2e14d842d336\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57fb9cb5f4-9fnr2" Jul 15 05:33:13.618996 kubelet[2737]: E0715 05:33:13.618979 2737 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b18fa0c097d18f561f0f2576454fdbfa01a1f4d523195386920a2e14d842d336\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57fb9cb5f4-9fnr2" Jul 15 05:33:13.619337 kubelet[2737]: E0715 05:33:13.619029 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-57fb9cb5f4-9fnr2_calico-system(efe7ad1c-d3cd-4848-b56b-05835e9bdae6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-57fb9cb5f4-9fnr2_calico-system(efe7ad1c-d3cd-4848-b56b-05835e9bdae6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b18fa0c097d18f561f0f2576454fdbfa01a1f4d523195386920a2e14d842d336\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57fb9cb5f4-9fnr2" podUID="efe7ad1c-d3cd-4848-b56b-05835e9bdae6" Jul 15 05:33:13.620374 containerd[1551]: time="2025-07-15T05:33:13.620351438Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b4fb66-qrqgk,Uid:31c4da21-35cd-49df-bc2f-abe8b087a31b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7d54626272e4106bff9278a151467db65b147de7f6c82f8ec372fa94a536106\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:33:13.620816 containerd[1551]: time="2025-07-15T05:33:13.620791035Z" level=error msg="Failed to destroy network for sandbox \"3267c15564571895e624d9d6839d7527183ed23b918a31e4af88a180a82b75aa\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:33:13.620901 kubelet[2737]: E0715 05:33:13.620727 2737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7d54626272e4106bff9278a151467db65b147de7f6c82f8ec372fa94a536106\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:33:13.620930 kubelet[2737]: E0715 05:33:13.620913 2737 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7d54626272e4106bff9278a151467db65b147de7f6c82f8ec372fa94a536106\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57b4fb66-qrqgk" Jul 15 05:33:13.620971 kubelet[2737]: E0715 05:33:13.620930 2737 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7d54626272e4106bff9278a151467db65b147de7f6c82f8ec372fa94a536106\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57b4fb66-qrqgk" Jul 15 05:33:13.620994 kubelet[2737]: E0715 05:33:13.620969 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57b4fb66-qrqgk_calico-apiserver(31c4da21-35cd-49df-bc2f-abe8b087a31b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-57b4fb66-qrqgk_calico-apiserver(31c4da21-35cd-49df-bc2f-abe8b087a31b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f7d54626272e4106bff9278a151467db65b147de7f6c82f8ec372fa94a536106\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57b4fb66-qrqgk" podUID="31c4da21-35cd-49df-bc2f-abe8b087a31b" Jul 15 05:33:13.621700 containerd[1551]: time="2025-07-15T05:33:13.621672870Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b4fb66-zlsp9,Uid:744412fe-caf0-40c9-a293-a02870e7919d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"12d00c87fe6b2f61831c1de2f19ab8db231bacfd3ec5c14e9df742a0cfa386e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:33:13.622381 kubelet[2737]: E0715 05:33:13.622351 2737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12d00c87fe6b2f61831c1de2f19ab8db231bacfd3ec5c14e9df742a0cfa386e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:33:13.622413 kubelet[2737]: E0715 05:33:13.622381 2737 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12d00c87fe6b2f61831c1de2f19ab8db231bacfd3ec5c14e9df742a0cfa386e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-57b4fb66-zlsp9" Jul 15 05:33:13.622413 kubelet[2737]: E0715 05:33:13.622394 2737 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12d00c87fe6b2f61831c1de2f19ab8db231bacfd3ec5c14e9df742a0cfa386e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57b4fb66-zlsp9" Jul 15 05:33:13.622578 kubelet[2737]: E0715 05:33:13.622502 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57b4fb66-zlsp9_calico-apiserver(744412fe-caf0-40c9-a293-a02870e7919d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57b4fb66-zlsp9_calico-apiserver(744412fe-caf0-40c9-a293-a02870e7919d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12d00c87fe6b2f61831c1de2f19ab8db231bacfd3ec5c14e9df742a0cfa386e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57b4fb66-zlsp9" podUID="744412fe-caf0-40c9-a293-a02870e7919d" Jul 15 05:33:13.622578 kubelet[2737]: E0715 05:33:13.622550 2737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3267c15564571895e624d9d6839d7527183ed23b918a31e4af88a180a82b75aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:33:13.622578 kubelet[2737]: E0715 05:33:13.622568 2737 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"3267c15564571895e624d9d6839d7527183ed23b918a31e4af88a180a82b75aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-vpk4j" Jul 15 05:33:13.623185 containerd[1551]: time="2025-07-15T05:33:13.622322796Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-vpk4j,Uid:bd15ad43-a0fe-48b6-a4a4-a4c5f213d373,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3267c15564571895e624d9d6839d7527183ed23b918a31e4af88a180a82b75aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:33:13.623219 kubelet[2737]: E0715 05:33:13.622579 2737 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3267c15564571895e624d9d6839d7527183ed23b918a31e4af88a180a82b75aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-vpk4j" Jul 15 05:33:13.623219 kubelet[2737]: E0715 05:33:13.622600 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-vpk4j_calico-system(bd15ad43-a0fe-48b6-a4a4-a4c5f213d373)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-vpk4j_calico-system(bd15ad43-a0fe-48b6-a4a4-a4c5f213d373)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3267c15564571895e624d9d6839d7527183ed23b918a31e4af88a180a82b75aa\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-vpk4j" podUID="bd15ad43-a0fe-48b6-a4a4-a4c5f213d373" Jul 15 05:33:13.910041 containerd[1551]: time="2025-07-15T05:33:13.909866899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 15 05:33:13.912301 kubelet[2737]: I0715 05:33:13.911939 2737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:33:13.915442 kubelet[2737]: E0715 05:33:13.915399 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:33:14.480629 systemd[1]: run-netns-cni\x2d99a80012\x2d623b\x2d9d3d\x2d4370\x2dcda01813fb0f.mount: Deactivated successfully. Jul 15 05:33:14.480998 systemd[1]: run-netns-cni\x2dba6b617b\x2d8f8b\x2dd524\x2d0cce\x2ddd478502d847.mount: Deactivated successfully. Jul 15 05:33:14.481156 systemd[1]: run-netns-cni\x2d4cda3d7c\x2df551\x2d1781\x2d81cf\x2d2afdb8855d9f.mount: Deactivated successfully. Jul 15 05:33:14.818530 systemd[1]: Created slice kubepods-besteffort-pod462befc7_9998_48e9_9fe3_be8ad0e74203.slice - libcontainer container kubepods-besteffort-pod462befc7_9998_48e9_9fe3_be8ad0e74203.slice. 
Jul 15 05:33:14.823111 containerd[1551]: time="2025-07-15T05:33:14.823061292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-62x4x,Uid:462befc7-9998-48e9-9fe3-be8ad0e74203,Namespace:calico-system,Attempt:0,}" Jul 15 05:33:14.899841 containerd[1551]: time="2025-07-15T05:33:14.899713016Z" level=error msg="Failed to destroy network for sandbox \"67e1ac82aaced14d806c6d9bcc04bf46d2a95024bf0c6acd63116f1e7ac95528\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:33:14.902814 containerd[1551]: time="2025-07-15T05:33:14.902769657Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-62x4x,Uid:462befc7-9998-48e9-9fe3-be8ad0e74203,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"67e1ac82aaced14d806c6d9bcc04bf46d2a95024bf0c6acd63116f1e7ac95528\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:33:14.903792 systemd[1]: run-netns-cni\x2dd24a20ce\x2d7e11\x2dd72c\x2dbc88\x2de4b712933a71.mount: Deactivated successfully. 
Jul 15 05:33:14.906364 kubelet[2737]: E0715 05:33:14.905227 2737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67e1ac82aaced14d806c6d9bcc04bf46d2a95024bf0c6acd63116f1e7ac95528\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:33:14.906364 kubelet[2737]: E0715 05:33:14.905313 2737 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67e1ac82aaced14d806c6d9bcc04bf46d2a95024bf0c6acd63116f1e7ac95528\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-62x4x" Jul 15 05:33:14.906364 kubelet[2737]: E0715 05:33:14.905340 2737 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67e1ac82aaced14d806c6d9bcc04bf46d2a95024bf0c6acd63116f1e7ac95528\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-62x4x" Jul 15 05:33:14.906486 kubelet[2737]: E0715 05:33:14.905412 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-62x4x_calico-system(462befc7-9998-48e9-9fe3-be8ad0e74203)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-62x4x_calico-system(462befc7-9998-48e9-9fe3-be8ad0e74203)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67e1ac82aaced14d806c6d9bcc04bf46d2a95024bf0c6acd63116f1e7ac95528\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-62x4x" podUID="462befc7-9998-48e9-9fe3-be8ad0e74203" Jul 15 05:33:14.911711 kubelet[2737]: E0715 05:33:14.911674 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:33:17.042874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3490486397.mount: Deactivated successfully. Jul 15 05:33:17.071129 containerd[1551]: time="2025-07-15T05:33:17.070890947Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:33:17.071853 containerd[1551]: time="2025-07-15T05:33:17.071650300Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 15 05:33:17.072355 containerd[1551]: time="2025-07-15T05:33:17.072317710Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:33:17.073638 containerd[1551]: time="2025-07-15T05:33:17.073611932Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:33:17.074057 containerd[1551]: time="2025-07-15T05:33:17.074031091Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 3.164115877s" Jul 15 
05:33:17.074154 containerd[1551]: time="2025-07-15T05:33:17.074139853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 15 05:33:17.100853 containerd[1551]: time="2025-07-15T05:33:17.100820566Z" level=info msg="CreateContainer within sandbox \"7d01a1a6a0afc9f426170d578683ffcf4a01bb2f75d2116776e314736756b5d2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 15 05:33:17.108113 containerd[1551]: time="2025-07-15T05:33:17.106013245Z" level=info msg="Container c54eed86ef5732bdb8222193b524de2af5571e915730455e11957d4307c4b084: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:33:17.113713 containerd[1551]: time="2025-07-15T05:33:17.113690038Z" level=info msg="CreateContainer within sandbox \"7d01a1a6a0afc9f426170d578683ffcf4a01bb2f75d2116776e314736756b5d2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c54eed86ef5732bdb8222193b524de2af5571e915730455e11957d4307c4b084\"" Jul 15 05:33:17.114167 containerd[1551]: time="2025-07-15T05:33:17.114068720Z" level=info msg="StartContainer for \"c54eed86ef5732bdb8222193b524de2af5571e915730455e11957d4307c4b084\"" Jul 15 05:33:17.115545 containerd[1551]: time="2025-07-15T05:33:17.115526160Z" level=info msg="connecting to shim c54eed86ef5732bdb8222193b524de2af5571e915730455e11957d4307c4b084" address="unix:///run/containerd/s/a8924df1e9cc3946ce8bce0b70eff0fcb837c991e94e6f252ec8537f0b19ebb2" protocol=ttrpc version=3 Jul 15 05:33:17.155033 systemd[1]: Started cri-containerd-c54eed86ef5732bdb8222193b524de2af5571e915730455e11957d4307c4b084.scope - libcontainer container c54eed86ef5732bdb8222193b524de2af5571e915730455e11957d4307c4b084. 
Jul 15 05:33:17.212203 containerd[1551]: time="2025-07-15T05:33:17.212054420Z" level=info msg="StartContainer for \"c54eed86ef5732bdb8222193b524de2af5571e915730455e11957d4307c4b084\" returns successfully" Jul 15 05:33:17.280960 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 15 05:33:17.281119 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 15 05:33:17.464559 kubelet[2737]: I0715 05:33:17.464509 2737 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qpbl\" (UniqueName: \"kubernetes.io/projected/efe7ad1c-d3cd-4848-b56b-05835e9bdae6-kube-api-access-9qpbl\") pod \"efe7ad1c-d3cd-4848-b56b-05835e9bdae6\" (UID: \"efe7ad1c-d3cd-4848-b56b-05835e9bdae6\") " Jul 15 05:33:17.465103 kubelet[2737]: I0715 05:33:17.465045 2737 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/efe7ad1c-d3cd-4848-b56b-05835e9bdae6-whisker-backend-key-pair\") pod \"efe7ad1c-d3cd-4848-b56b-05835e9bdae6\" (UID: \"efe7ad1c-d3cd-4848-b56b-05835e9bdae6\") " Jul 15 05:33:17.465103 kubelet[2737]: I0715 05:33:17.465098 2737 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efe7ad1c-d3cd-4848-b56b-05835e9bdae6-whisker-ca-bundle\") pod \"efe7ad1c-d3cd-4848-b56b-05835e9bdae6\" (UID: \"efe7ad1c-d3cd-4848-b56b-05835e9bdae6\") " Jul 15 05:33:17.465424 kubelet[2737]: I0715 05:33:17.465394 2737 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efe7ad1c-d3cd-4848-b56b-05835e9bdae6-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "efe7ad1c-d3cd-4848-b56b-05835e9bdae6" (UID: "efe7ad1c-d3cd-4848-b56b-05835e9bdae6"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 15 05:33:17.470004 kubelet[2737]: I0715 05:33:17.469819 2737 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efe7ad1c-d3cd-4848-b56b-05835e9bdae6-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "efe7ad1c-d3cd-4848-b56b-05835e9bdae6" (UID: "efe7ad1c-d3cd-4848-b56b-05835e9bdae6"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 15 05:33:17.472259 kubelet[2737]: I0715 05:33:17.472222 2737 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efe7ad1c-d3cd-4848-b56b-05835e9bdae6-kube-api-access-9qpbl" (OuterVolumeSpecName: "kube-api-access-9qpbl") pod "efe7ad1c-d3cd-4848-b56b-05835e9bdae6" (UID: "efe7ad1c-d3cd-4848-b56b-05835e9bdae6"). InnerVolumeSpecName "kube-api-access-9qpbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 05:33:17.566371 kubelet[2737]: I0715 05:33:17.566308 2737 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9qpbl\" (UniqueName: \"kubernetes.io/projected/efe7ad1c-d3cd-4848-b56b-05835e9bdae6-kube-api-access-9qpbl\") on node \"172-237-155-110\" DevicePath \"\"" Jul 15 05:33:17.566371 kubelet[2737]: I0715 05:33:17.566332 2737 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/efe7ad1c-d3cd-4848-b56b-05835e9bdae6-whisker-backend-key-pair\") on node \"172-237-155-110\" DevicePath \"\"" Jul 15 05:33:17.566371 kubelet[2737]: I0715 05:33:17.566340 2737 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efe7ad1c-d3cd-4848-b56b-05835e9bdae6-whisker-ca-bundle\") on node \"172-237-155-110\" DevicePath \"\"" Jul 15 05:33:17.811898 systemd[1]: Removed slice kubepods-besteffort-podefe7ad1c_d3cd_4848_b56b_05835e9bdae6.slice - libcontainer container 
kubepods-besteffort-podefe7ad1c_d3cd_4848_b56b_05835e9bdae6.slice. Jul 15 05:33:17.931971 kubelet[2737]: I0715 05:33:17.931922 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xwgmt" podStartSLOduration=1.669807298 podStartE2EDuration="10.931909078s" podCreationTimestamp="2025-07-15 05:33:07 +0000 UTC" firstStartedPulling="2025-07-15 05:33:07.812580842 +0000 UTC m=+18.138522101" lastFinishedPulling="2025-07-15 05:33:17.074682612 +0000 UTC m=+27.400623881" observedRunningTime="2025-07-15 05:33:17.931322862 +0000 UTC m=+28.257264141" watchObservedRunningTime="2025-07-15 05:33:17.931909078 +0000 UTC m=+28.257850337" Jul 15 05:33:17.978001 systemd[1]: Created slice kubepods-besteffort-pod98437203_55bd_4772_9d50_567e938bfe96.slice - libcontainer container kubepods-besteffort-pod98437203_55bd_4772_9d50_567e938bfe96.slice. Jul 15 05:33:18.043931 systemd[1]: var-lib-kubelet-pods-efe7ad1c\x2dd3cd\x2d4848\x2db56b\x2d05835e9bdae6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9qpbl.mount: Deactivated successfully. Jul 15 05:33:18.044018 systemd[1]: var-lib-kubelet-pods-efe7ad1c\x2dd3cd\x2d4848\x2db56b\x2d05835e9bdae6-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 15 05:33:18.069720 kubelet[2737]: I0715 05:33:18.069618 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/98437203-55bd-4772-9d50-567e938bfe96-whisker-backend-key-pair\") pod \"whisker-6fd887f898-wcnht\" (UID: \"98437203-55bd-4772-9d50-567e938bfe96\") " pod="calico-system/whisker-6fd887f898-wcnht" Jul 15 05:33:18.069720 kubelet[2737]: I0715 05:33:18.069659 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm4gz\" (UniqueName: \"kubernetes.io/projected/98437203-55bd-4772-9d50-567e938bfe96-kube-api-access-pm4gz\") pod \"whisker-6fd887f898-wcnht\" (UID: \"98437203-55bd-4772-9d50-567e938bfe96\") " pod="calico-system/whisker-6fd887f898-wcnht" Jul 15 05:33:18.069720 kubelet[2737]: I0715 05:33:18.069680 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98437203-55bd-4772-9d50-567e938bfe96-whisker-ca-bundle\") pod \"whisker-6fd887f898-wcnht\" (UID: \"98437203-55bd-4772-9d50-567e938bfe96\") " pod="calico-system/whisker-6fd887f898-wcnht" Jul 15 05:33:18.281760 containerd[1551]: time="2025-07-15T05:33:18.281726707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fd887f898-wcnht,Uid:98437203-55bd-4772-9d50-567e938bfe96,Namespace:calico-system,Attempt:0,}" Jul 15 05:33:18.402649 systemd-networkd[1457]: cali1c62f82e244: Link UP Jul 15 05:33:18.403116 systemd-networkd[1457]: cali1c62f82e244: Gained carrier Jul 15 05:33:18.417021 containerd[1551]: 2025-07-15 05:33:18.306 [INFO][3846] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 05:33:18.417021 containerd[1551]: 2025-07-15 05:33:18.340 [INFO][3846] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{172--237--155--110-k8s-whisker--6fd887f898--wcnht-eth0 whisker-6fd887f898- calico-system 98437203-55bd-4772-9d50-567e938bfe96 893 0 2025-07-15 05:33:17 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6fd887f898 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-237-155-110 whisker-6fd887f898-wcnht eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1c62f82e244 [] [] }} ContainerID="6b4e43ddcc114402da1c80374107252e465f3e9b2359e319e91ddb64db3d9a3f" Namespace="calico-system" Pod="whisker-6fd887f898-wcnht" WorkloadEndpoint="172--237--155--110-k8s-whisker--6fd887f898--wcnht-" Jul 15 05:33:18.417021 containerd[1551]: 2025-07-15 05:33:18.341 [INFO][3846] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6b4e43ddcc114402da1c80374107252e465f3e9b2359e319e91ddb64db3d9a3f" Namespace="calico-system" Pod="whisker-6fd887f898-wcnht" WorkloadEndpoint="172--237--155--110-k8s-whisker--6fd887f898--wcnht-eth0" Jul 15 05:33:18.417021 containerd[1551]: 2025-07-15 05:33:18.363 [INFO][3859] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6b4e43ddcc114402da1c80374107252e465f3e9b2359e319e91ddb64db3d9a3f" HandleID="k8s-pod-network.6b4e43ddcc114402da1c80374107252e465f3e9b2359e319e91ddb64db3d9a3f" Workload="172--237--155--110-k8s-whisker--6fd887f898--wcnht-eth0" Jul 15 05:33:18.417199 containerd[1551]: 2025-07-15 05:33:18.363 [INFO][3859] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6b4e43ddcc114402da1c80374107252e465f3e9b2359e319e91ddb64db3d9a3f" HandleID="k8s-pod-network.6b4e43ddcc114402da1c80374107252e465f3e9b2359e319e91ddb64db3d9a3f" Workload="172--237--155--110-k8s-whisker--6fd887f898--wcnht-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f210), Attrs:map[string]string{"namespace":"calico-system", "node":"172-237-155-110", 
"pod":"whisker-6fd887f898-wcnht", "timestamp":"2025-07-15 05:33:18.363384389 +0000 UTC"}, Hostname:"172-237-155-110", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:33:18.417199 containerd[1551]: 2025-07-15 05:33:18.363 [INFO][3859] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:33:18.417199 containerd[1551]: 2025-07-15 05:33:18.363 [INFO][3859] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 05:33:18.417199 containerd[1551]: 2025-07-15 05:33:18.363 [INFO][3859] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-155-110' Jul 15 05:33:18.417199 containerd[1551]: 2025-07-15 05:33:18.369 [INFO][3859] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6b4e43ddcc114402da1c80374107252e465f3e9b2359e319e91ddb64db3d9a3f" host="172-237-155-110" Jul 15 05:33:18.417199 containerd[1551]: 2025-07-15 05:33:18.373 [INFO][3859] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-155-110" Jul 15 05:33:18.417199 containerd[1551]: 2025-07-15 05:33:18.377 [INFO][3859] ipam/ipam.go 511: Trying affinity for 192.168.39.0/26 host="172-237-155-110" Jul 15 05:33:18.417199 containerd[1551]: 2025-07-15 05:33:18.378 [INFO][3859] ipam/ipam.go 158: Attempting to load block cidr=192.168.39.0/26 host="172-237-155-110" Jul 15 05:33:18.417199 containerd[1551]: 2025-07-15 05:33:18.383 [INFO][3859] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.39.0/26 host="172-237-155-110" Jul 15 05:33:18.417199 containerd[1551]: 2025-07-15 05:33:18.383 [INFO][3859] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.39.0/26 handle="k8s-pod-network.6b4e43ddcc114402da1c80374107252e465f3e9b2359e319e91ddb64db3d9a3f" host="172-237-155-110" Jul 15 05:33:18.417369 containerd[1551]: 2025-07-15 
05:33:18.384 [INFO][3859] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6b4e43ddcc114402da1c80374107252e465f3e9b2359e319e91ddb64db3d9a3f Jul 15 05:33:18.417369 containerd[1551]: 2025-07-15 05:33:18.387 [INFO][3859] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.39.0/26 handle="k8s-pod-network.6b4e43ddcc114402da1c80374107252e465f3e9b2359e319e91ddb64db3d9a3f" host="172-237-155-110" Jul 15 05:33:18.417369 containerd[1551]: 2025-07-15 05:33:18.391 [INFO][3859] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.39.1/26] block=192.168.39.0/26 handle="k8s-pod-network.6b4e43ddcc114402da1c80374107252e465f3e9b2359e319e91ddb64db3d9a3f" host="172-237-155-110" Jul 15 05:33:18.417369 containerd[1551]: 2025-07-15 05:33:18.391 [INFO][3859] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.39.1/26] handle="k8s-pod-network.6b4e43ddcc114402da1c80374107252e465f3e9b2359e319e91ddb64db3d9a3f" host="172-237-155-110" Jul 15 05:33:18.417369 containerd[1551]: 2025-07-15 05:33:18.391 [INFO][3859] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 05:33:18.417369 containerd[1551]: 2025-07-15 05:33:18.391 [INFO][3859] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.39.1/26] IPv6=[] ContainerID="6b4e43ddcc114402da1c80374107252e465f3e9b2359e319e91ddb64db3d9a3f" HandleID="k8s-pod-network.6b4e43ddcc114402da1c80374107252e465f3e9b2359e319e91ddb64db3d9a3f" Workload="172--237--155--110-k8s-whisker--6fd887f898--wcnht-eth0" Jul 15 05:33:18.417462 containerd[1551]: 2025-07-15 05:33:18.395 [INFO][3846] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6b4e43ddcc114402da1c80374107252e465f3e9b2359e319e91ddb64db3d9a3f" Namespace="calico-system" Pod="whisker-6fd887f898-wcnht" WorkloadEndpoint="172--237--155--110-k8s-whisker--6fd887f898--wcnht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--155--110-k8s-whisker--6fd887f898--wcnht-eth0", GenerateName:"whisker-6fd887f898-", Namespace:"calico-system", SelfLink:"", UID:"98437203-55bd-4772-9d50-567e938bfe96", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 33, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6fd887f898", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-155-110", ContainerID:"", Pod:"whisker-6fd887f898-wcnht", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.39.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, 
InterfaceName:"cali1c62f82e244", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:33:18.417462 containerd[1551]: 2025-07-15 05:33:18.395 [INFO][3846] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.39.1/32] ContainerID="6b4e43ddcc114402da1c80374107252e465f3e9b2359e319e91ddb64db3d9a3f" Namespace="calico-system" Pod="whisker-6fd887f898-wcnht" WorkloadEndpoint="172--237--155--110-k8s-whisker--6fd887f898--wcnht-eth0" Jul 15 05:33:18.417513 containerd[1551]: 2025-07-15 05:33:18.395 [INFO][3846] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1c62f82e244 ContainerID="6b4e43ddcc114402da1c80374107252e465f3e9b2359e319e91ddb64db3d9a3f" Namespace="calico-system" Pod="whisker-6fd887f898-wcnht" WorkloadEndpoint="172--237--155--110-k8s-whisker--6fd887f898--wcnht-eth0" Jul 15 05:33:18.417513 containerd[1551]: 2025-07-15 05:33:18.403 [INFO][3846] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6b4e43ddcc114402da1c80374107252e465f3e9b2359e319e91ddb64db3d9a3f" Namespace="calico-system" Pod="whisker-6fd887f898-wcnht" WorkloadEndpoint="172--237--155--110-k8s-whisker--6fd887f898--wcnht-eth0" Jul 15 05:33:18.417608 containerd[1551]: 2025-07-15 05:33:18.404 [INFO][3846] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6b4e43ddcc114402da1c80374107252e465f3e9b2359e319e91ddb64db3d9a3f" Namespace="calico-system" Pod="whisker-6fd887f898-wcnht" WorkloadEndpoint="172--237--155--110-k8s-whisker--6fd887f898--wcnht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--155--110-k8s-whisker--6fd887f898--wcnht-eth0", GenerateName:"whisker-6fd887f898-", Namespace:"calico-system", SelfLink:"", UID:"98437203-55bd-4772-9d50-567e938bfe96", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, 
time.July, 15, 5, 33, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6fd887f898", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-155-110", ContainerID:"6b4e43ddcc114402da1c80374107252e465f3e9b2359e319e91ddb64db3d9a3f", Pod:"whisker-6fd887f898-wcnht", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.39.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1c62f82e244", MAC:"ee:67:85:4c:38:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:33:18.417658 containerd[1551]: 2025-07-15 05:33:18.412 [INFO][3846] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6b4e43ddcc114402da1c80374107252e465f3e9b2359e319e91ddb64db3d9a3f" Namespace="calico-system" Pod="whisker-6fd887f898-wcnht" WorkloadEndpoint="172--237--155--110-k8s-whisker--6fd887f898--wcnht-eth0" Jul 15 05:33:18.445308 containerd[1551]: time="2025-07-15T05:33:18.445239558Z" level=info msg="connecting to shim 6b4e43ddcc114402da1c80374107252e465f3e9b2359e319e91ddb64db3d9a3f" address="unix:///run/containerd/s/3d453dc8301be7b426381e86e650d567334a05257b37b7ab61401dfa8394a47f" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:33:18.469191 systemd[1]: Started cri-containerd-6b4e43ddcc114402da1c80374107252e465f3e9b2359e319e91ddb64db3d9a3f.scope - libcontainer container 6b4e43ddcc114402da1c80374107252e465f3e9b2359e319e91ddb64db3d9a3f. 
Jul 15 05:33:18.514120 containerd[1551]: time="2025-07-15T05:33:18.514059805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fd887f898-wcnht,Uid:98437203-55bd-4772-9d50-567e938bfe96,Namespace:calico-system,Attempt:0,} returns sandbox id \"6b4e43ddcc114402da1c80374107252e465f3e9b2359e319e91ddb64db3d9a3f\"" Jul 15 05:33:18.515769 containerd[1551]: time="2025-07-15T05:33:18.515732207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 15 05:33:18.924273 kubelet[2737]: I0715 05:33:18.924241 2737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:33:19.132157 systemd-networkd[1457]: vxlan.calico: Link UP Jul 15 05:33:19.132170 systemd-networkd[1457]: vxlan.calico: Gained carrier Jul 15 05:33:19.809540 kubelet[2737]: I0715 05:33:19.809486 2737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efe7ad1c-d3cd-4848-b56b-05835e9bdae6" path="/var/lib/kubelet/pods/efe7ad1c-d3cd-4848-b56b-05835e9bdae6/volumes" Jul 15 05:33:19.929924 containerd[1551]: time="2025-07-15T05:33:19.929888338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:33:19.930904 containerd[1551]: time="2025-07-15T05:33:19.930794878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 15 05:33:19.931300 containerd[1551]: time="2025-07-15T05:33:19.931275006Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:33:19.933126 containerd[1551]: time="2025-07-15T05:33:19.932536343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:33:19.933126 
containerd[1551]: time="2025-07-15T05:33:19.933021931Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.41718919s" Jul 15 05:33:19.933126 containerd[1551]: time="2025-07-15T05:33:19.933048639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 15 05:33:19.938539 containerd[1551]: time="2025-07-15T05:33:19.938509078Z" level=info msg="CreateContainer within sandbox \"6b4e43ddcc114402da1c80374107252e465f3e9b2359e319e91ddb64db3d9a3f\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 15 05:33:19.947690 containerd[1551]: time="2025-07-15T05:33:19.947169776Z" level=info msg="Container 0e4cb0294a52b0ddc1b2b47d7e3edd8f56214eff2f0689259f96b77450b394f6: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:33:19.948790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3553962111.mount: Deactivated successfully. 
Jul 15 05:33:19.954639 containerd[1551]: time="2025-07-15T05:33:19.954611474Z" level=info msg="CreateContainer within sandbox \"6b4e43ddcc114402da1c80374107252e465f3e9b2359e319e91ddb64db3d9a3f\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"0e4cb0294a52b0ddc1b2b47d7e3edd8f56214eff2f0689259f96b77450b394f6\"" Jul 15 05:33:19.957168 containerd[1551]: time="2025-07-15T05:33:19.957129587Z" level=info msg="StartContainer for \"0e4cb0294a52b0ddc1b2b47d7e3edd8f56214eff2f0689259f96b77450b394f6\"" Jul 15 05:33:19.960244 containerd[1551]: time="2025-07-15T05:33:19.960218473Z" level=info msg="connecting to shim 0e4cb0294a52b0ddc1b2b47d7e3edd8f56214eff2f0689259f96b77450b394f6" address="unix:///run/containerd/s/3d453dc8301be7b426381e86e650d567334a05257b37b7ab61401dfa8394a47f" protocol=ttrpc version=3 Jul 15 05:33:19.989219 systemd[1]: Started cri-containerd-0e4cb0294a52b0ddc1b2b47d7e3edd8f56214eff2f0689259f96b77450b394f6.scope - libcontainer container 0e4cb0294a52b0ddc1b2b47d7e3edd8f56214eff2f0689259f96b77450b394f6. Jul 15 05:33:20.032316 containerd[1551]: time="2025-07-15T05:33:20.032229002Z" level=info msg="StartContainer for \"0e4cb0294a52b0ddc1b2b47d7e3edd8f56214eff2f0689259f96b77450b394f6\" returns successfully" Jul 15 05:33:20.033484 containerd[1551]: time="2025-07-15T05:33:20.033425998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 15 05:33:20.379196 systemd-networkd[1457]: cali1c62f82e244: Gained IPv6LL Jul 15 05:33:20.827300 systemd-networkd[1457]: vxlan.calico: Gained IPv6LL Jul 15 05:33:21.226038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2508407025.mount: Deactivated successfully. 
Jul 15 05:33:21.235345 containerd[1551]: time="2025-07-15T05:33:21.235310192Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:33:21.236093 containerd[1551]: time="2025-07-15T05:33:21.235889168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 15 05:33:21.236608 containerd[1551]: time="2025-07-15T05:33:21.236589038Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:33:21.238421 containerd[1551]: time="2025-07-15T05:33:21.238395773Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:33:21.239918 containerd[1551]: time="2025-07-15T05:33:21.239895376Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 1.206332647s" Jul 15 05:33:21.239955 containerd[1551]: time="2025-07-15T05:33:21.239922344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 15 05:33:21.244779 containerd[1551]: time="2025-07-15T05:33:21.244751453Z" level=info msg="CreateContainer within sandbox \"6b4e43ddcc114402da1c80374107252e465f3e9b2359e319e91ddb64db3d9a3f\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 15 05:33:21.251835 
containerd[1551]: time="2025-07-15T05:33:21.251252156Z" level=info msg="Container d6e3d99468f6e5c1a0f8d1fef94928da245a84e993a1e80eea563be44e22ab86: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:33:21.253966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2102188606.mount: Deactivated successfully. Jul 15 05:33:21.262486 containerd[1551]: time="2025-07-15T05:33:21.262466714Z" level=info msg="CreateContainer within sandbox \"6b4e43ddcc114402da1c80374107252e465f3e9b2359e319e91ddb64db3d9a3f\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"d6e3d99468f6e5c1a0f8d1fef94928da245a84e993a1e80eea563be44e22ab86\"" Jul 15 05:33:21.263327 containerd[1551]: time="2025-07-15T05:33:21.263309315Z" level=info msg="StartContainer for \"d6e3d99468f6e5c1a0f8d1fef94928da245a84e993a1e80eea563be44e22ab86\"" Jul 15 05:33:21.264032 containerd[1551]: time="2025-07-15T05:33:21.264015294Z" level=info msg="connecting to shim d6e3d99468f6e5c1a0f8d1fef94928da245a84e993a1e80eea563be44e22ab86" address="unix:///run/containerd/s/3d453dc8301be7b426381e86e650d567334a05257b37b7ab61401dfa8394a47f" protocol=ttrpc version=3 Jul 15 05:33:21.289193 systemd[1]: Started cri-containerd-d6e3d99468f6e5c1a0f8d1fef94928da245a84e993a1e80eea563be44e22ab86.scope - libcontainer container d6e3d99468f6e5c1a0f8d1fef94928da245a84e993a1e80eea563be44e22ab86. 
Jul 15 05:33:21.333068 containerd[1551]: time="2025-07-15T05:33:21.333035824Z" level=info msg="StartContainer for \"d6e3d99468f6e5c1a0f8d1fef94928da245a84e993a1e80eea563be44e22ab86\" returns successfully" Jul 15 05:33:21.946720 kubelet[2737]: I0715 05:33:21.946646 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6fd887f898-wcnht" podStartSLOduration=2.219625698 podStartE2EDuration="4.946525731s" podCreationTimestamp="2025-07-15 05:33:17 +0000 UTC" firstStartedPulling="2025-07-15 05:33:18.515290329 +0000 UTC m=+28.841231598" lastFinishedPulling="2025-07-15 05:33:21.242190362 +0000 UTC m=+31.568131631" observedRunningTime="2025-07-15 05:33:21.943947741 +0000 UTC m=+32.269889020" watchObservedRunningTime="2025-07-15 05:33:21.946525731 +0000 UTC m=+32.272467010" Jul 15 05:33:24.262884 kubelet[2737]: I0715 05:33:24.262166 2737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:33:24.316414 containerd[1551]: time="2025-07-15T05:33:24.316366409Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c54eed86ef5732bdb8222193b524de2af5571e915730455e11957d4307c4b084\" id:\"266f12377cf5e531a570abed8bf32083b998aee2b93f0fadcae6ccc84b00f13e\" pid:4207 exited_at:{seconds:1752557604 nanos:316045205}" Jul 15 05:33:24.384655 containerd[1551]: time="2025-07-15T05:33:24.384613572Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c54eed86ef5732bdb8222193b524de2af5571e915730455e11957d4307c4b084\" id:\"421199551f0fa2a9d503cc6c0da4abe37c9e18d9d6584083cba6546ab8dba8dd\" pid:4231 exited_at:{seconds:1752557604 nanos:383195810}" Jul 15 05:33:24.805796 containerd[1551]: time="2025-07-15T05:33:24.805757401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b4fb66-zlsp9,Uid:744412fe-caf0-40c9-a293-a02870e7919d,Namespace:calico-apiserver,Attempt:0,}" Jul 15 05:33:24.883113 systemd-networkd[1457]: cali49b02adce76: Link UP Jul 15 05:33:24.883812 
systemd-networkd[1457]: cali49b02adce76: Gained carrier Jul 15 05:33:24.894228 containerd[1551]: 2025-07-15 05:33:24.832 [INFO][4252] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--155--110-k8s-calico--apiserver--57b4fb66--zlsp9-eth0 calico-apiserver-57b4fb66- calico-apiserver 744412fe-caf0-40c9-a293-a02870e7919d 823 0 2025-07-15 05:33:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57b4fb66 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-237-155-110 calico-apiserver-57b4fb66-zlsp9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali49b02adce76 [] [] }} ContainerID="da4152301c919666c0f9d03120b6c7032dc282530e343b2c8b99de11cdb16bf1" Namespace="calico-apiserver" Pod="calico-apiserver-57b4fb66-zlsp9" WorkloadEndpoint="172--237--155--110-k8s-calico--apiserver--57b4fb66--zlsp9-" Jul 15 05:33:24.894228 containerd[1551]: 2025-07-15 05:33:24.832 [INFO][4252] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="da4152301c919666c0f9d03120b6c7032dc282530e343b2c8b99de11cdb16bf1" Namespace="calico-apiserver" Pod="calico-apiserver-57b4fb66-zlsp9" WorkloadEndpoint="172--237--155--110-k8s-calico--apiserver--57b4fb66--zlsp9-eth0" Jul 15 05:33:24.894228 containerd[1551]: 2025-07-15 05:33:24.852 [INFO][4264] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da4152301c919666c0f9d03120b6c7032dc282530e343b2c8b99de11cdb16bf1" HandleID="k8s-pod-network.da4152301c919666c0f9d03120b6c7032dc282530e343b2c8b99de11cdb16bf1" Workload="172--237--155--110-k8s-calico--apiserver--57b4fb66--zlsp9-eth0" Jul 15 05:33:24.894379 containerd[1551]: 2025-07-15 05:33:24.852 [INFO][4264] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="da4152301c919666c0f9d03120b6c7032dc282530e343b2c8b99de11cdb16bf1" HandleID="k8s-pod-network.da4152301c919666c0f9d03120b6c7032dc282530e343b2c8b99de11cdb16bf1" Workload="172--237--155--110-k8s-calico--apiserver--57b4fb66--zlsp9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efb0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-237-155-110", "pod":"calico-apiserver-57b4fb66-zlsp9", "timestamp":"2025-07-15 05:33:24.852272365 +0000 UTC"}, Hostname:"172-237-155-110", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:33:24.894379 containerd[1551]: 2025-07-15 05:33:24.852 [INFO][4264] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:33:24.894379 containerd[1551]: 2025-07-15 05:33:24.852 [INFO][4264] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 05:33:24.894379 containerd[1551]: 2025-07-15 05:33:24.852 [INFO][4264] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-155-110' Jul 15 05:33:24.894379 containerd[1551]: 2025-07-15 05:33:24.858 [INFO][4264] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.da4152301c919666c0f9d03120b6c7032dc282530e343b2c8b99de11cdb16bf1" host="172-237-155-110" Jul 15 05:33:24.894379 containerd[1551]: 2025-07-15 05:33:24.862 [INFO][4264] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-155-110" Jul 15 05:33:24.894379 containerd[1551]: 2025-07-15 05:33:24.865 [INFO][4264] ipam/ipam.go 511: Trying affinity for 192.168.39.0/26 host="172-237-155-110" Jul 15 05:33:24.894379 containerd[1551]: 2025-07-15 05:33:24.866 [INFO][4264] ipam/ipam.go 158: Attempting to load block cidr=192.168.39.0/26 host="172-237-155-110" Jul 15 05:33:24.894379 containerd[1551]: 2025-07-15 05:33:24.868 [INFO][4264] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.39.0/26 host="172-237-155-110" Jul 15 05:33:24.894534 containerd[1551]: 2025-07-15 05:33:24.868 [INFO][4264] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.39.0/26 handle="k8s-pod-network.da4152301c919666c0f9d03120b6c7032dc282530e343b2c8b99de11cdb16bf1" host="172-237-155-110" Jul 15 05:33:24.894534 containerd[1551]: 2025-07-15 05:33:24.869 [INFO][4264] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.da4152301c919666c0f9d03120b6c7032dc282530e343b2c8b99de11cdb16bf1 Jul 15 05:33:24.894534 containerd[1551]: 2025-07-15 05:33:24.873 [INFO][4264] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.39.0/26 handle="k8s-pod-network.da4152301c919666c0f9d03120b6c7032dc282530e343b2c8b99de11cdb16bf1" host="172-237-155-110" Jul 15 05:33:24.894534 containerd[1551]: 2025-07-15 05:33:24.877 [INFO][4264] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.39.2/26] block=192.168.39.0/26 
handle="k8s-pod-network.da4152301c919666c0f9d03120b6c7032dc282530e343b2c8b99de11cdb16bf1" host="172-237-155-110" Jul 15 05:33:24.894534 containerd[1551]: 2025-07-15 05:33:24.877 [INFO][4264] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.39.2/26] handle="k8s-pod-network.da4152301c919666c0f9d03120b6c7032dc282530e343b2c8b99de11cdb16bf1" host="172-237-155-110" Jul 15 05:33:24.894534 containerd[1551]: 2025-07-15 05:33:24.877 [INFO][4264] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 05:33:24.894534 containerd[1551]: 2025-07-15 05:33:24.877 [INFO][4264] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.39.2/26] IPv6=[] ContainerID="da4152301c919666c0f9d03120b6c7032dc282530e343b2c8b99de11cdb16bf1" HandleID="k8s-pod-network.da4152301c919666c0f9d03120b6c7032dc282530e343b2c8b99de11cdb16bf1" Workload="172--237--155--110-k8s-calico--apiserver--57b4fb66--zlsp9-eth0" Jul 15 05:33:24.894670 containerd[1551]: 2025-07-15 05:33:24.879 [INFO][4252] cni-plugin/k8s.go 418: Populated endpoint ContainerID="da4152301c919666c0f9d03120b6c7032dc282530e343b2c8b99de11cdb16bf1" Namespace="calico-apiserver" Pod="calico-apiserver-57b4fb66-zlsp9" WorkloadEndpoint="172--237--155--110-k8s-calico--apiserver--57b4fb66--zlsp9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--155--110-k8s-calico--apiserver--57b4fb66--zlsp9-eth0", GenerateName:"calico-apiserver-57b4fb66-", Namespace:"calico-apiserver", SelfLink:"", UID:"744412fe-caf0-40c9-a293-a02870e7919d", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 33, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57b4fb66", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-155-110", ContainerID:"", Pod:"calico-apiserver-57b4fb66-zlsp9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.39.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali49b02adce76", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:33:24.894762 containerd[1551]: 2025-07-15 05:33:24.879 [INFO][4252] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.39.2/32] ContainerID="da4152301c919666c0f9d03120b6c7032dc282530e343b2c8b99de11cdb16bf1" Namespace="calico-apiserver" Pod="calico-apiserver-57b4fb66-zlsp9" WorkloadEndpoint="172--237--155--110-k8s-calico--apiserver--57b4fb66--zlsp9-eth0" Jul 15 05:33:24.894762 containerd[1551]: 2025-07-15 05:33:24.879 [INFO][4252] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali49b02adce76 ContainerID="da4152301c919666c0f9d03120b6c7032dc282530e343b2c8b99de11cdb16bf1" Namespace="calico-apiserver" Pod="calico-apiserver-57b4fb66-zlsp9" WorkloadEndpoint="172--237--155--110-k8s-calico--apiserver--57b4fb66--zlsp9-eth0" Jul 15 05:33:24.894762 containerd[1551]: 2025-07-15 05:33:24.882 [INFO][4252] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da4152301c919666c0f9d03120b6c7032dc282530e343b2c8b99de11cdb16bf1" Namespace="calico-apiserver" Pod="calico-apiserver-57b4fb66-zlsp9" WorkloadEndpoint="172--237--155--110-k8s-calico--apiserver--57b4fb66--zlsp9-eth0" Jul 15 05:33:24.894818 containerd[1551]: 2025-07-15 05:33:24.883 [INFO][4252] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="da4152301c919666c0f9d03120b6c7032dc282530e343b2c8b99de11cdb16bf1" Namespace="calico-apiserver" Pod="calico-apiserver-57b4fb66-zlsp9" WorkloadEndpoint="172--237--155--110-k8s-calico--apiserver--57b4fb66--zlsp9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--155--110-k8s-calico--apiserver--57b4fb66--zlsp9-eth0", GenerateName:"calico-apiserver-57b4fb66-", Namespace:"calico-apiserver", SelfLink:"", UID:"744412fe-caf0-40c9-a293-a02870e7919d", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 33, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57b4fb66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-155-110", ContainerID:"da4152301c919666c0f9d03120b6c7032dc282530e343b2c8b99de11cdb16bf1", Pod:"calico-apiserver-57b4fb66-zlsp9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.39.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali49b02adce76", MAC:"b6:95:bb:5b:7a:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:33:24.894857 containerd[1551]: 2025-07-15 05:33:24.889 [INFO][4252] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="da4152301c919666c0f9d03120b6c7032dc282530e343b2c8b99de11cdb16bf1" Namespace="calico-apiserver" Pod="calico-apiserver-57b4fb66-zlsp9" WorkloadEndpoint="172--237--155--110-k8s-calico--apiserver--57b4fb66--zlsp9-eth0" Jul 15 05:33:24.918055 containerd[1551]: time="2025-07-15T05:33:24.917968100Z" level=info msg="connecting to shim da4152301c919666c0f9d03120b6c7032dc282530e343b2c8b99de11cdb16bf1" address="unix:///run/containerd/s/130f8995ab2dce352bac8906e3b60a334c59c9538397b57430ae537b93bb0fa0" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:33:24.945180 systemd[1]: Started cri-containerd-da4152301c919666c0f9d03120b6c7032dc282530e343b2c8b99de11cdb16bf1.scope - libcontainer container da4152301c919666c0f9d03120b6c7032dc282530e343b2c8b99de11cdb16bf1. Jul 15 05:33:24.984620 containerd[1551]: time="2025-07-15T05:33:24.984595190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b4fb66-zlsp9,Uid:744412fe-caf0-40c9-a293-a02870e7919d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"da4152301c919666c0f9d03120b6c7032dc282530e343b2c8b99de11cdb16bf1\"" Jul 15 05:33:24.985887 containerd[1551]: time="2025-07-15T05:33:24.985854400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 05:33:26.406309 containerd[1551]: time="2025-07-15T05:33:26.405722695Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:33:26.406309 containerd[1551]: time="2025-07-15T05:33:26.406280829Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 15 05:33:26.406831 containerd[1551]: time="2025-07-15T05:33:26.406813894Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:33:26.407838 
containerd[1551]: time="2025-07-15T05:33:26.407807908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:33:26.408778 containerd[1551]: time="2025-07-15T05:33:26.408746615Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 1.422786s" Jul 15 05:33:26.408829 containerd[1551]: time="2025-07-15T05:33:26.408779553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 15 05:33:26.412652 containerd[1551]: time="2025-07-15T05:33:26.412616306Z" level=info msg="CreateContainer within sandbox \"da4152301c919666c0f9d03120b6c7032dc282530e343b2c8b99de11cdb16bf1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 05:33:26.419773 containerd[1551]: time="2025-07-15T05:33:26.419274748Z" level=info msg="Container 6d7242dbe41c34f038fd8a4f3166411abb4e0e40d7e2f3aeffc39613bfa609eb: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:33:26.423451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2217758691.mount: Deactivated successfully. 
Jul 15 05:33:26.426108 containerd[1551]: time="2025-07-15T05:33:26.426066594Z" level=info msg="CreateContainer within sandbox \"da4152301c919666c0f9d03120b6c7032dc282530e343b2c8b99de11cdb16bf1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6d7242dbe41c34f038fd8a4f3166411abb4e0e40d7e2f3aeffc39613bfa609eb\"" Jul 15 05:33:26.426621 containerd[1551]: time="2025-07-15T05:33:26.426470706Z" level=info msg="StartContainer for \"6d7242dbe41c34f038fd8a4f3166411abb4e0e40d7e2f3aeffc39613bfa609eb\"" Jul 15 05:33:26.427761 containerd[1551]: time="2025-07-15T05:33:26.427733227Z" level=info msg="connecting to shim 6d7242dbe41c34f038fd8a4f3166411abb4e0e40d7e2f3aeffc39613bfa609eb" address="unix:///run/containerd/s/130f8995ab2dce352bac8906e3b60a334c59c9538397b57430ae537b93bb0fa0" protocol=ttrpc version=3 Jul 15 05:33:26.455173 systemd[1]: Started cri-containerd-6d7242dbe41c34f038fd8a4f3166411abb4e0e40d7e2f3aeffc39613bfa609eb.scope - libcontainer container 6d7242dbe41c34f038fd8a4f3166411abb4e0e40d7e2f3aeffc39613bfa609eb. 
Jul 15 05:33:26.497439 containerd[1551]: time="2025-07-15T05:33:26.497373539Z" level=info msg="StartContainer for \"6d7242dbe41c34f038fd8a4f3166411abb4e0e40d7e2f3aeffc39613bfa609eb\" returns successfully" Jul 15 05:33:26.807227 kubelet[2737]: E0715 05:33:26.806469 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:33:26.808557 containerd[1551]: time="2025-07-15T05:33:26.808399486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w4rqj,Uid:110a4b1b-441e-4629-b994-a1d817c572c6,Namespace:kube-system,Attempt:0,}" Jul 15 05:33:26.917135 systemd-networkd[1457]: cali49b02adce76: Gained IPv6LL Jul 15 05:33:26.917333 systemd-networkd[1457]: cali34413435777: Link UP Jul 15 05:33:26.917482 systemd-networkd[1457]: cali34413435777: Gained carrier Jul 15 05:33:26.936026 containerd[1551]: 2025-07-15 05:33:26.843 [INFO][4369] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--155--110-k8s-coredns--674b8bbfcf--w4rqj-eth0 coredns-674b8bbfcf- kube-system 110a4b1b-441e-4629-b994-a1d817c572c6 806 0 2025-07-15 05:32:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-237-155-110 coredns-674b8bbfcf-w4rqj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali34413435777 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b0031d99ec0a6a8ddc857f8ce4ef8139c8a8dff88811400a1aaea22720f010a8" Namespace="kube-system" Pod="coredns-674b8bbfcf-w4rqj" WorkloadEndpoint="172--237--155--110-k8s-coredns--674b8bbfcf--w4rqj-" Jul 15 05:33:26.936026 containerd[1551]: 2025-07-15 05:33:26.843 [INFO][4369] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="b0031d99ec0a6a8ddc857f8ce4ef8139c8a8dff88811400a1aaea22720f010a8" Namespace="kube-system" Pod="coredns-674b8bbfcf-w4rqj" WorkloadEndpoint="172--237--155--110-k8s-coredns--674b8bbfcf--w4rqj-eth0" Jul 15 05:33:26.936026 containerd[1551]: 2025-07-15 05:33:26.871 [INFO][4382] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b0031d99ec0a6a8ddc857f8ce4ef8139c8a8dff88811400a1aaea22720f010a8" HandleID="k8s-pod-network.b0031d99ec0a6a8ddc857f8ce4ef8139c8a8dff88811400a1aaea22720f010a8" Workload="172--237--155--110-k8s-coredns--674b8bbfcf--w4rqj-eth0" Jul 15 05:33:26.936182 containerd[1551]: 2025-07-15 05:33:26.871 [INFO][4382] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b0031d99ec0a6a8ddc857f8ce4ef8139c8a8dff88811400a1aaea22720f010a8" HandleID="k8s-pod-network.b0031d99ec0a6a8ddc857f8ce4ef8139c8a8dff88811400a1aaea22720f010a8" Workload="172--237--155--110-k8s-coredns--674b8bbfcf--w4rqj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad4a0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-237-155-110", "pod":"coredns-674b8bbfcf-w4rqj", "timestamp":"2025-07-15 05:33:26.871654303 +0000 UTC"}, Hostname:"172-237-155-110", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:33:26.936182 containerd[1551]: 2025-07-15 05:33:26.871 [INFO][4382] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:33:26.936182 containerd[1551]: 2025-07-15 05:33:26.871 [INFO][4382] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 05:33:26.936182 containerd[1551]: 2025-07-15 05:33:26.872 [INFO][4382] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-155-110' Jul 15 05:33:26.936182 containerd[1551]: 2025-07-15 05:33:26.877 [INFO][4382] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b0031d99ec0a6a8ddc857f8ce4ef8139c8a8dff88811400a1aaea22720f010a8" host="172-237-155-110" Jul 15 05:33:26.936182 containerd[1551]: 2025-07-15 05:33:26.882 [INFO][4382] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-155-110" Jul 15 05:33:26.936182 containerd[1551]: 2025-07-15 05:33:26.886 [INFO][4382] ipam/ipam.go 511: Trying affinity for 192.168.39.0/26 host="172-237-155-110" Jul 15 05:33:26.936182 containerd[1551]: 2025-07-15 05:33:26.891 [INFO][4382] ipam/ipam.go 158: Attempting to load block cidr=192.168.39.0/26 host="172-237-155-110" Jul 15 05:33:26.936182 containerd[1551]: 2025-07-15 05:33:26.893 [INFO][4382] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.39.0/26 host="172-237-155-110" Jul 15 05:33:26.936182 containerd[1551]: 2025-07-15 05:33:26.893 [INFO][4382] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.39.0/26 handle="k8s-pod-network.b0031d99ec0a6a8ddc857f8ce4ef8139c8a8dff88811400a1aaea22720f010a8" host="172-237-155-110" Jul 15 05:33:26.936364 containerd[1551]: 2025-07-15 05:33:26.895 [INFO][4382] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b0031d99ec0a6a8ddc857f8ce4ef8139c8a8dff88811400a1aaea22720f010a8 Jul 15 05:33:26.936364 containerd[1551]: 2025-07-15 05:33:26.899 [INFO][4382] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.39.0/26 handle="k8s-pod-network.b0031d99ec0a6a8ddc857f8ce4ef8139c8a8dff88811400a1aaea22720f010a8" host="172-237-155-110" Jul 15 05:33:26.936364 containerd[1551]: 2025-07-15 05:33:26.904 [INFO][4382] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.39.3/26] block=192.168.39.0/26 
handle="k8s-pod-network.b0031d99ec0a6a8ddc857f8ce4ef8139c8a8dff88811400a1aaea22720f010a8" host="172-237-155-110" Jul 15 05:33:26.936364 containerd[1551]: 2025-07-15 05:33:26.904 [INFO][4382] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.39.3/26] handle="k8s-pod-network.b0031d99ec0a6a8ddc857f8ce4ef8139c8a8dff88811400a1aaea22720f010a8" host="172-237-155-110" Jul 15 05:33:26.936364 containerd[1551]: 2025-07-15 05:33:26.904 [INFO][4382] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 05:33:26.936364 containerd[1551]: 2025-07-15 05:33:26.904 [INFO][4382] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.39.3/26] IPv6=[] ContainerID="b0031d99ec0a6a8ddc857f8ce4ef8139c8a8dff88811400a1aaea22720f010a8" HandleID="k8s-pod-network.b0031d99ec0a6a8ddc857f8ce4ef8139c8a8dff88811400a1aaea22720f010a8" Workload="172--237--155--110-k8s-coredns--674b8bbfcf--w4rqj-eth0" Jul 15 05:33:26.936466 containerd[1551]: 2025-07-15 05:33:26.907 [INFO][4369] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b0031d99ec0a6a8ddc857f8ce4ef8139c8a8dff88811400a1aaea22720f010a8" Namespace="kube-system" Pod="coredns-674b8bbfcf-w4rqj" WorkloadEndpoint="172--237--155--110-k8s-coredns--674b8bbfcf--w4rqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--155--110-k8s-coredns--674b8bbfcf--w4rqj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"110a4b1b-441e-4629-b994-a1d817c572c6", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 32, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-155-110", ContainerID:"", Pod:"coredns-674b8bbfcf-w4rqj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.39.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali34413435777", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:33:26.936466 containerd[1551]: 2025-07-15 05:33:26.907 [INFO][4369] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.39.3/32] ContainerID="b0031d99ec0a6a8ddc857f8ce4ef8139c8a8dff88811400a1aaea22720f010a8" Namespace="kube-system" Pod="coredns-674b8bbfcf-w4rqj" WorkloadEndpoint="172--237--155--110-k8s-coredns--674b8bbfcf--w4rqj-eth0" Jul 15 05:33:26.936466 containerd[1551]: 2025-07-15 05:33:26.907 [INFO][4369] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali34413435777 ContainerID="b0031d99ec0a6a8ddc857f8ce4ef8139c8a8dff88811400a1aaea22720f010a8" Namespace="kube-system" Pod="coredns-674b8bbfcf-w4rqj" WorkloadEndpoint="172--237--155--110-k8s-coredns--674b8bbfcf--w4rqj-eth0" Jul 15 05:33:26.936466 containerd[1551]: 2025-07-15 05:33:26.912 [INFO][4369] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b0031d99ec0a6a8ddc857f8ce4ef8139c8a8dff88811400a1aaea22720f010a8" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-w4rqj" WorkloadEndpoint="172--237--155--110-k8s-coredns--674b8bbfcf--w4rqj-eth0" Jul 15 05:33:26.936466 containerd[1551]: 2025-07-15 05:33:26.913 [INFO][4369] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b0031d99ec0a6a8ddc857f8ce4ef8139c8a8dff88811400a1aaea22720f010a8" Namespace="kube-system" Pod="coredns-674b8bbfcf-w4rqj" WorkloadEndpoint="172--237--155--110-k8s-coredns--674b8bbfcf--w4rqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--155--110-k8s-coredns--674b8bbfcf--w4rqj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"110a4b1b-441e-4629-b994-a1d817c572c6", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 32, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-155-110", ContainerID:"b0031d99ec0a6a8ddc857f8ce4ef8139c8a8dff88811400a1aaea22720f010a8", Pod:"coredns-674b8bbfcf-w4rqj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.39.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali34413435777", MAC:"3a:6e:be:9e:47:44", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:33:26.936466 containerd[1551]: 2025-07-15 05:33:26.924 [INFO][4369] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b0031d99ec0a6a8ddc857f8ce4ef8139c8a8dff88811400a1aaea22720f010a8" Namespace="kube-system" Pod="coredns-674b8bbfcf-w4rqj" WorkloadEndpoint="172--237--155--110-k8s-coredns--674b8bbfcf--w4rqj-eth0" Jul 15 05:33:26.961787 containerd[1551]: time="2025-07-15T05:33:26.961732191Z" level=info msg="connecting to shim b0031d99ec0a6a8ddc857f8ce4ef8139c8a8dff88811400a1aaea22720f010a8" address="unix:///run/containerd/s/1dd9023ca9c0fcb4f96f920fdf7c5954dd66b6c5f709d2c1d37bbaefd20acc7b" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:33:26.971107 kubelet[2737]: I0715 05:33:26.970883 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-57b4fb66-zlsp9" podStartSLOduration=20.547121654 podStartE2EDuration="21.970868799s" podCreationTimestamp="2025-07-15 05:33:05 +0000 UTC" firstStartedPulling="2025-07-15 05:33:24.98563957 +0000 UTC m=+35.311580829" lastFinishedPulling="2025-07-15 05:33:26.409386705 +0000 UTC m=+36.735327974" observedRunningTime="2025-07-15 05:33:26.969825987 +0000 UTC m=+37.295767266" watchObservedRunningTime="2025-07-15 05:33:26.970868799 +0000 UTC m=+37.296810068" Jul 15 05:33:26.995923 systemd[1]: Started cri-containerd-b0031d99ec0a6a8ddc857f8ce4ef8139c8a8dff88811400a1aaea22720f010a8.scope - libcontainer container b0031d99ec0a6a8ddc857f8ce4ef8139c8a8dff88811400a1aaea22720f010a8. 
Jul 15 05:33:27.041906 containerd[1551]: time="2025-07-15T05:33:27.041869884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w4rqj,Uid:110a4b1b-441e-4629-b994-a1d817c572c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0031d99ec0a6a8ddc857f8ce4ef8139c8a8dff88811400a1aaea22720f010a8\"" Jul 15 05:33:27.042534 kubelet[2737]: E0715 05:33:27.042518 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:33:27.047531 containerd[1551]: time="2025-07-15T05:33:27.047503894Z" level=info msg="CreateContainer within sandbox \"b0031d99ec0a6a8ddc857f8ce4ef8139c8a8dff88811400a1aaea22720f010a8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 05:33:27.054878 containerd[1551]: time="2025-07-15T05:33:27.054850870Z" level=info msg="Container 0f8f21dba41bf83312ccb8b2852ea2557b4a7a1fffd84e54db89273f8fa59df7: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:33:27.060891 containerd[1551]: time="2025-07-15T05:33:27.060820378Z" level=info msg="CreateContainer within sandbox \"b0031d99ec0a6a8ddc857f8ce4ef8139c8a8dff88811400a1aaea22720f010a8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0f8f21dba41bf83312ccb8b2852ea2557b4a7a1fffd84e54db89273f8fa59df7\"" Jul 15 05:33:27.061511 containerd[1551]: time="2025-07-15T05:33:27.061479984Z" level=info msg="StartContainer for \"0f8f21dba41bf83312ccb8b2852ea2557b4a7a1fffd84e54db89273f8fa59df7\"" Jul 15 05:33:27.062953 containerd[1551]: time="2025-07-15T05:33:27.062920650Z" level=info msg="connecting to shim 0f8f21dba41bf83312ccb8b2852ea2557b4a7a1fffd84e54db89273f8fa59df7" address="unix:///run/containerd/s/1dd9023ca9c0fcb4f96f920fdf7c5954dd66b6c5f709d2c1d37bbaefd20acc7b" protocol=ttrpc version=3 Jul 15 05:33:27.080214 systemd[1]: Started cri-containerd-0f8f21dba41bf83312ccb8b2852ea2557b4a7a1fffd84e54db89273f8fa59df7.scope 
- libcontainer container 0f8f21dba41bf83312ccb8b2852ea2557b4a7a1fffd84e54db89273f8fa59df7. Jul 15 05:33:27.108207 containerd[1551]: time="2025-07-15T05:33:27.108180535Z" level=info msg="StartContainer for \"0f8f21dba41bf83312ccb8b2852ea2557b4a7a1fffd84e54db89273f8fa59df7\" returns successfully" Jul 15 05:33:27.813679 containerd[1551]: time="2025-07-15T05:33:27.813432945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-62x4x,Uid:462befc7-9998-48e9-9fe3-be8ad0e74203,Namespace:calico-system,Attempt:0,}" Jul 15 05:33:27.815036 containerd[1551]: time="2025-07-15T05:33:27.814551034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b4fb66-qrqgk,Uid:31c4da21-35cd-49df-bc2f-abe8b087a31b,Namespace:calico-apiserver,Attempt:0,}" Jul 15 05:33:27.965038 kubelet[2737]: E0715 05:33:27.964990 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:33:27.966234 kubelet[2737]: I0715 05:33:27.965708 2737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:33:27.974155 systemd-networkd[1457]: cali70248d6ada0: Link UP Jul 15 05:33:27.975134 systemd-networkd[1457]: cali70248d6ada0: Gained carrier Jul 15 05:33:27.989253 containerd[1551]: 2025-07-15 05:33:27.862 [INFO][4486] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--155--110-k8s-csi--node--driver--62x4x-eth0 csi-node-driver- calico-system 462befc7-9998-48e9-9fe3-be8ad0e74203 721 0 2025-07-15 05:33:07 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-237-155-110 
csi-node-driver-62x4x eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali70248d6ada0 [] [] }} ContainerID="8be8fecb584582548fca412e50f5251e69b423e88f4216ef81263ebcdcea172d" Namespace="calico-system" Pod="csi-node-driver-62x4x" WorkloadEndpoint="172--237--155--110-k8s-csi--node--driver--62x4x-" Jul 15 05:33:27.989253 containerd[1551]: 2025-07-15 05:33:27.862 [INFO][4486] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8be8fecb584582548fca412e50f5251e69b423e88f4216ef81263ebcdcea172d" Namespace="calico-system" Pod="csi-node-driver-62x4x" WorkloadEndpoint="172--237--155--110-k8s-csi--node--driver--62x4x-eth0" Jul 15 05:33:27.989253 containerd[1551]: 2025-07-15 05:33:27.914 [INFO][4500] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8be8fecb584582548fca412e50f5251e69b423e88f4216ef81263ebcdcea172d" HandleID="k8s-pod-network.8be8fecb584582548fca412e50f5251e69b423e88f4216ef81263ebcdcea172d" Workload="172--237--155--110-k8s-csi--node--driver--62x4x-eth0" Jul 15 05:33:27.989253 containerd[1551]: 2025-07-15 05:33:27.914 [INFO][4500] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8be8fecb584582548fca412e50f5251e69b423e88f4216ef81263ebcdcea172d" HandleID="k8s-pod-network.8be8fecb584582548fca412e50f5251e69b423e88f4216ef81263ebcdcea172d" Workload="172--237--155--110-k8s-csi--node--driver--62x4x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd640), Attrs:map[string]string{"namespace":"calico-system", "node":"172-237-155-110", "pod":"csi-node-driver-62x4x", "timestamp":"2025-07-15 05:33:27.911513693 +0000 UTC"}, Hostname:"172-237-155-110", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:33:27.989253 containerd[1551]: 2025-07-15 05:33:27.914 [INFO][4500] ipam/ipam_plugin.go 353: About to 
acquire host-wide IPAM lock. Jul 15 05:33:27.989253 containerd[1551]: 2025-07-15 05:33:27.914 [INFO][4500] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 05:33:27.989253 containerd[1551]: 2025-07-15 05:33:27.914 [INFO][4500] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-155-110' Jul 15 05:33:27.989253 containerd[1551]: 2025-07-15 05:33:27.929 [INFO][4500] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8be8fecb584582548fca412e50f5251e69b423e88f4216ef81263ebcdcea172d" host="172-237-155-110" Jul 15 05:33:27.989253 containerd[1551]: 2025-07-15 05:33:27.934 [INFO][4500] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-155-110" Jul 15 05:33:27.989253 containerd[1551]: 2025-07-15 05:33:27.938 [INFO][4500] ipam/ipam.go 511: Trying affinity for 192.168.39.0/26 host="172-237-155-110" Jul 15 05:33:27.989253 containerd[1551]: 2025-07-15 05:33:27.940 [INFO][4500] ipam/ipam.go 158: Attempting to load block cidr=192.168.39.0/26 host="172-237-155-110" Jul 15 05:33:27.989253 containerd[1551]: 2025-07-15 05:33:27.941 [INFO][4500] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.39.0/26 host="172-237-155-110" Jul 15 05:33:27.989253 containerd[1551]: 2025-07-15 05:33:27.941 [INFO][4500] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.39.0/26 handle="k8s-pod-network.8be8fecb584582548fca412e50f5251e69b423e88f4216ef81263ebcdcea172d" host="172-237-155-110" Jul 15 05:33:27.989253 containerd[1551]: 2025-07-15 05:33:27.943 [INFO][4500] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8be8fecb584582548fca412e50f5251e69b423e88f4216ef81263ebcdcea172d Jul 15 05:33:27.989253 containerd[1551]: 2025-07-15 05:33:27.949 [INFO][4500] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.39.0/26 handle="k8s-pod-network.8be8fecb584582548fca412e50f5251e69b423e88f4216ef81263ebcdcea172d" host="172-237-155-110" Jul 15 05:33:27.989253 
containerd[1551]: 2025-07-15 05:33:27.954 [INFO][4500] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.39.4/26] block=192.168.39.0/26 handle="k8s-pod-network.8be8fecb584582548fca412e50f5251e69b423e88f4216ef81263ebcdcea172d" host="172-237-155-110" Jul 15 05:33:27.989253 containerd[1551]: 2025-07-15 05:33:27.954 [INFO][4500] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.39.4/26] handle="k8s-pod-network.8be8fecb584582548fca412e50f5251e69b423e88f4216ef81263ebcdcea172d" host="172-237-155-110" Jul 15 05:33:27.989253 containerd[1551]: 2025-07-15 05:33:27.954 [INFO][4500] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 05:33:27.989253 containerd[1551]: 2025-07-15 05:33:27.954 [INFO][4500] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.39.4/26] IPv6=[] ContainerID="8be8fecb584582548fca412e50f5251e69b423e88f4216ef81263ebcdcea172d" HandleID="k8s-pod-network.8be8fecb584582548fca412e50f5251e69b423e88f4216ef81263ebcdcea172d" Workload="172--237--155--110-k8s-csi--node--driver--62x4x-eth0" Jul 15 05:33:27.989670 containerd[1551]: 2025-07-15 05:33:27.965 [INFO][4486] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8be8fecb584582548fca412e50f5251e69b423e88f4216ef81263ebcdcea172d" Namespace="calico-system" Pod="csi-node-driver-62x4x" WorkloadEndpoint="172--237--155--110-k8s-csi--node--driver--62x4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--155--110-k8s-csi--node--driver--62x4x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"462befc7-9998-48e9-9fe3-be8ad0e74203", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 33, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", 
"k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-155-110", ContainerID:"", Pod:"csi-node-driver-62x4x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.39.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali70248d6ada0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:33:27.989670 containerd[1551]: 2025-07-15 05:33:27.965 [INFO][4486] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.39.4/32] ContainerID="8be8fecb584582548fca412e50f5251e69b423e88f4216ef81263ebcdcea172d" Namespace="calico-system" Pod="csi-node-driver-62x4x" WorkloadEndpoint="172--237--155--110-k8s-csi--node--driver--62x4x-eth0" Jul 15 05:33:27.989670 containerd[1551]: 2025-07-15 05:33:27.966 [INFO][4486] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali70248d6ada0 ContainerID="8be8fecb584582548fca412e50f5251e69b423e88f4216ef81263ebcdcea172d" Namespace="calico-system" Pod="csi-node-driver-62x4x" WorkloadEndpoint="172--237--155--110-k8s-csi--node--driver--62x4x-eth0" Jul 15 05:33:27.989670 containerd[1551]: 2025-07-15 05:33:27.973 [INFO][4486] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8be8fecb584582548fca412e50f5251e69b423e88f4216ef81263ebcdcea172d" Namespace="calico-system" Pod="csi-node-driver-62x4x" WorkloadEndpoint="172--237--155--110-k8s-csi--node--driver--62x4x-eth0" Jul 15 05:33:27.989670 containerd[1551]: 
2025-07-15 05:33:27.973 [INFO][4486] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8be8fecb584582548fca412e50f5251e69b423e88f4216ef81263ebcdcea172d" Namespace="calico-system" Pod="csi-node-driver-62x4x" WorkloadEndpoint="172--237--155--110-k8s-csi--node--driver--62x4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--155--110-k8s-csi--node--driver--62x4x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"462befc7-9998-48e9-9fe3-be8ad0e74203", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 33, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-155-110", ContainerID:"8be8fecb584582548fca412e50f5251e69b423e88f4216ef81263ebcdcea172d", Pod:"csi-node-driver-62x4x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.39.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali70248d6ada0", MAC:"b6:78:42:5b:e7:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:33:27.989670 containerd[1551]: 2025-07-15 05:33:27.985 [INFO][4486] cni-plugin/k8s.go 
532: Wrote updated endpoint to datastore ContainerID="8be8fecb584582548fca412e50f5251e69b423e88f4216ef81263ebcdcea172d" Namespace="calico-system" Pod="csi-node-driver-62x4x" WorkloadEndpoint="172--237--155--110-k8s-csi--node--driver--62x4x-eth0" Jul 15 05:33:28.021007 containerd[1551]: time="2025-07-15T05:33:28.020974068Z" level=info msg="connecting to shim 8be8fecb584582548fca412e50f5251e69b423e88f4216ef81263ebcdcea172d" address="unix:///run/containerd/s/0a137f941ee73bcb7b7eff38935d3b212159168929649d6ed2d44b3501f610d6" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:33:28.041311 kubelet[2737]: I0715 05:33:28.041207 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-w4rqj" podStartSLOduration=32.041192207 podStartE2EDuration="32.041192207s" podCreationTimestamp="2025-07-15 05:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:33:27.990740794 +0000 UTC m=+38.316682063" watchObservedRunningTime="2025-07-15 05:33:28.041192207 +0000 UTC m=+38.367133466" Jul 15 05:33:28.058463 systemd[1]: Started cri-containerd-8be8fecb584582548fca412e50f5251e69b423e88f4216ef81263ebcdcea172d.scope - libcontainer container 8be8fecb584582548fca412e50f5251e69b423e88f4216ef81263ebcdcea172d. 
Jul 15 05:33:28.109030 systemd-networkd[1457]: cali9e1adf41a15: Link UP Jul 15 05:33:28.112289 systemd-networkd[1457]: cali9e1adf41a15: Gained carrier Jul 15 05:33:28.128670 containerd[1551]: 2025-07-15 05:33:27.875 [INFO][4477] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--155--110-k8s-calico--apiserver--57b4fb66--qrqgk-eth0 calico-apiserver-57b4fb66- calico-apiserver 31c4da21-35cd-49df-bc2f-abe8b087a31b 821 0 2025-07-15 05:33:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57b4fb66 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-237-155-110 calico-apiserver-57b4fb66-qrqgk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9e1adf41a15 [] [] }} ContainerID="e5d38f97b5f0d9ee335ebf787ca36a89fc7c910bbf55e14e3a70d589a65be24e" Namespace="calico-apiserver" Pod="calico-apiserver-57b4fb66-qrqgk" WorkloadEndpoint="172--237--155--110-k8s-calico--apiserver--57b4fb66--qrqgk-" Jul 15 05:33:28.128670 containerd[1551]: 2025-07-15 05:33:27.875 [INFO][4477] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e5d38f97b5f0d9ee335ebf787ca36a89fc7c910bbf55e14e3a70d589a65be24e" Namespace="calico-apiserver" Pod="calico-apiserver-57b4fb66-qrqgk" WorkloadEndpoint="172--237--155--110-k8s-calico--apiserver--57b4fb66--qrqgk-eth0" Jul 15 05:33:28.128670 containerd[1551]: 2025-07-15 05:33:27.919 [INFO][4505] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e5d38f97b5f0d9ee335ebf787ca36a89fc7c910bbf55e14e3a70d589a65be24e" HandleID="k8s-pod-network.e5d38f97b5f0d9ee335ebf787ca36a89fc7c910bbf55e14e3a70d589a65be24e" Workload="172--237--155--110-k8s-calico--apiserver--57b4fb66--qrqgk-eth0" Jul 15 05:33:28.128670 containerd[1551]: 2025-07-15 05:33:27.920 [INFO][4505] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e5d38f97b5f0d9ee335ebf787ca36a89fc7c910bbf55e14e3a70d589a65be24e" HandleID="k8s-pod-network.e5d38f97b5f0d9ee335ebf787ca36a89fc7c910bbf55e14e3a70d589a65be24e" Workload="172--237--155--110-k8s-calico--apiserver--57b4fb66--qrqgk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad660), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-237-155-110", "pod":"calico-apiserver-57b4fb66-qrqgk", "timestamp":"2025-07-15 05:33:27.918889629 +0000 UTC"}, Hostname:"172-237-155-110", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:33:28.128670 containerd[1551]: 2025-07-15 05:33:27.920 [INFO][4505] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:33:28.128670 containerd[1551]: 2025-07-15 05:33:27.955 [INFO][4505] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 05:33:28.128670 containerd[1551]: 2025-07-15 05:33:27.955 [INFO][4505] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-155-110' Jul 15 05:33:28.128670 containerd[1551]: 2025-07-15 05:33:28.043 [INFO][4505] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e5d38f97b5f0d9ee335ebf787ca36a89fc7c910bbf55e14e3a70d589a65be24e" host="172-237-155-110" Jul 15 05:33:28.128670 containerd[1551]: 2025-07-15 05:33:28.071 [INFO][4505] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-155-110" Jul 15 05:33:28.128670 containerd[1551]: 2025-07-15 05:33:28.080 [INFO][4505] ipam/ipam.go 511: Trying affinity for 192.168.39.0/26 host="172-237-155-110" Jul 15 05:33:28.128670 containerd[1551]: 2025-07-15 05:33:28.083 [INFO][4505] ipam/ipam.go 158: Attempting to load block cidr=192.168.39.0/26 host="172-237-155-110" Jul 15 05:33:28.128670 containerd[1551]: 2025-07-15 05:33:28.085 [INFO][4505] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.39.0/26 host="172-237-155-110" Jul 15 05:33:28.128670 containerd[1551]: 2025-07-15 05:33:28.085 [INFO][4505] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.39.0/26 handle="k8s-pod-network.e5d38f97b5f0d9ee335ebf787ca36a89fc7c910bbf55e14e3a70d589a65be24e" host="172-237-155-110" Jul 15 05:33:28.128670 containerd[1551]: 2025-07-15 05:33:28.087 [INFO][4505] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e5d38f97b5f0d9ee335ebf787ca36a89fc7c910bbf55e14e3a70d589a65be24e Jul 15 05:33:28.128670 containerd[1551]: 2025-07-15 05:33:28.091 [INFO][4505] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.39.0/26 handle="k8s-pod-network.e5d38f97b5f0d9ee335ebf787ca36a89fc7c910bbf55e14e3a70d589a65be24e" host="172-237-155-110" Jul 15 05:33:28.128670 containerd[1551]: 2025-07-15 05:33:28.095 [INFO][4505] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.39.5/26] block=192.168.39.0/26 
handle="k8s-pod-network.e5d38f97b5f0d9ee335ebf787ca36a89fc7c910bbf55e14e3a70d589a65be24e" host="172-237-155-110" Jul 15 05:33:28.128670 containerd[1551]: 2025-07-15 05:33:28.096 [INFO][4505] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.39.5/26] handle="k8s-pod-network.e5d38f97b5f0d9ee335ebf787ca36a89fc7c910bbf55e14e3a70d589a65be24e" host="172-237-155-110" Jul 15 05:33:28.128670 containerd[1551]: 2025-07-15 05:33:28.097 [INFO][4505] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 05:33:28.128670 containerd[1551]: 2025-07-15 05:33:28.097 [INFO][4505] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.39.5/26] IPv6=[] ContainerID="e5d38f97b5f0d9ee335ebf787ca36a89fc7c910bbf55e14e3a70d589a65be24e" HandleID="k8s-pod-network.e5d38f97b5f0d9ee335ebf787ca36a89fc7c910bbf55e14e3a70d589a65be24e" Workload="172--237--155--110-k8s-calico--apiserver--57b4fb66--qrqgk-eth0" Jul 15 05:33:28.129135 containerd[1551]: 2025-07-15 05:33:28.101 [INFO][4477] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e5d38f97b5f0d9ee335ebf787ca36a89fc7c910bbf55e14e3a70d589a65be24e" Namespace="calico-apiserver" Pod="calico-apiserver-57b4fb66-qrqgk" WorkloadEndpoint="172--237--155--110-k8s-calico--apiserver--57b4fb66--qrqgk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--155--110-k8s-calico--apiserver--57b4fb66--qrqgk-eth0", GenerateName:"calico-apiserver-57b4fb66-", Namespace:"calico-apiserver", SelfLink:"", UID:"31c4da21-35cd-49df-bc2f-abe8b087a31b", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 33, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57b4fb66", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-155-110", ContainerID:"", Pod:"calico-apiserver-57b4fb66-qrqgk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.39.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9e1adf41a15", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:33:28.129135 containerd[1551]: 2025-07-15 05:33:28.101 [INFO][4477] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.39.5/32] ContainerID="e5d38f97b5f0d9ee335ebf787ca36a89fc7c910bbf55e14e3a70d589a65be24e" Namespace="calico-apiserver" Pod="calico-apiserver-57b4fb66-qrqgk" WorkloadEndpoint="172--237--155--110-k8s-calico--apiserver--57b4fb66--qrqgk-eth0" Jul 15 05:33:28.129135 containerd[1551]: 2025-07-15 05:33:28.101 [INFO][4477] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9e1adf41a15 ContainerID="e5d38f97b5f0d9ee335ebf787ca36a89fc7c910bbf55e14e3a70d589a65be24e" Namespace="calico-apiserver" Pod="calico-apiserver-57b4fb66-qrqgk" WorkloadEndpoint="172--237--155--110-k8s-calico--apiserver--57b4fb66--qrqgk-eth0" Jul 15 05:33:28.129135 containerd[1551]: 2025-07-15 05:33:28.109 [INFO][4477] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5d38f97b5f0d9ee335ebf787ca36a89fc7c910bbf55e14e3a70d589a65be24e" Namespace="calico-apiserver" Pod="calico-apiserver-57b4fb66-qrqgk" WorkloadEndpoint="172--237--155--110-k8s-calico--apiserver--57b4fb66--qrqgk-eth0" Jul 15 05:33:28.129135 containerd[1551]: 2025-07-15 05:33:28.110 [INFO][4477] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e5d38f97b5f0d9ee335ebf787ca36a89fc7c910bbf55e14e3a70d589a65be24e" Namespace="calico-apiserver" Pod="calico-apiserver-57b4fb66-qrqgk" WorkloadEndpoint="172--237--155--110-k8s-calico--apiserver--57b4fb66--qrqgk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--155--110-k8s-calico--apiserver--57b4fb66--qrqgk-eth0", GenerateName:"calico-apiserver-57b4fb66-", Namespace:"calico-apiserver", SelfLink:"", UID:"31c4da21-35cd-49df-bc2f-abe8b087a31b", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 33, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57b4fb66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-155-110", ContainerID:"e5d38f97b5f0d9ee335ebf787ca36a89fc7c910bbf55e14e3a70d589a65be24e", Pod:"calico-apiserver-57b4fb66-qrqgk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.39.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9e1adf41a15", MAC:"ca:f3:ae:3d:13:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:33:28.129135 containerd[1551]: 2025-07-15 05:33:28.119 [INFO][4477] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="e5d38f97b5f0d9ee335ebf787ca36a89fc7c910bbf55e14e3a70d589a65be24e" Namespace="calico-apiserver" Pod="calico-apiserver-57b4fb66-qrqgk" WorkloadEndpoint="172--237--155--110-k8s-calico--apiserver--57b4fb66--qrqgk-eth0" Jul 15 05:33:28.152190 containerd[1551]: time="2025-07-15T05:33:28.152152130Z" level=info msg="connecting to shim e5d38f97b5f0d9ee335ebf787ca36a89fc7c910bbf55e14e3a70d589a65be24e" address="unix:///run/containerd/s/15f689078c3db54fabd239522009ed114fdf56764b8ff1b16bdd4d6c8aede203" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:33:28.157921 containerd[1551]: time="2025-07-15T05:33:28.157787576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-62x4x,Uid:462befc7-9998-48e9-9fe3-be8ad0e74203,Namespace:calico-system,Attempt:0,} returns sandbox id \"8be8fecb584582548fca412e50f5251e69b423e88f4216ef81263ebcdcea172d\"" Jul 15 05:33:28.160995 containerd[1551]: time="2025-07-15T05:33:28.160917243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 15 05:33:28.182211 systemd[1]: Started cri-containerd-e5d38f97b5f0d9ee335ebf787ca36a89fc7c910bbf55e14e3a70d589a65be24e.scope - libcontainer container e5d38f97b5f0d9ee335ebf787ca36a89fc7c910bbf55e14e3a70d589a65be24e. 
Jul 15 05:33:28.260677 containerd[1551]: time="2025-07-15T05:33:28.260612514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b4fb66-qrqgk,Uid:31c4da21-35cd-49df-bc2f-abe8b087a31b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e5d38f97b5f0d9ee335ebf787ca36a89fc7c910bbf55e14e3a70d589a65be24e\"" Jul 15 05:33:28.265713 containerd[1551]: time="2025-07-15T05:33:28.265433429Z" level=info msg="CreateContainer within sandbox \"e5d38f97b5f0d9ee335ebf787ca36a89fc7c910bbf55e14e3a70d589a65be24e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 05:33:28.269568 containerd[1551]: time="2025-07-15T05:33:28.269537331Z" level=info msg="Container c69d8ed0d2c2927041044cc7fc56db4826f0f4aee0f4d936eabd19275043b16c: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:33:28.273439 containerd[1551]: time="2025-07-15T05:33:28.273413530Z" level=info msg="CreateContainer within sandbox \"e5d38f97b5f0d9ee335ebf787ca36a89fc7c910bbf55e14e3a70d589a65be24e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c69d8ed0d2c2927041044cc7fc56db4826f0f4aee0f4d936eabd19275043b16c\"" Jul 15 05:33:28.274650 containerd[1551]: time="2025-07-15T05:33:28.274539490Z" level=info msg="StartContainer for \"c69d8ed0d2c2927041044cc7fc56db4826f0f4aee0f4d936eabd19275043b16c\"" Jul 15 05:33:28.275693 containerd[1551]: time="2025-07-15T05:33:28.275676538Z" level=info msg="connecting to shim c69d8ed0d2c2927041044cc7fc56db4826f0f4aee0f4d936eabd19275043b16c" address="unix:///run/containerd/s/15f689078c3db54fabd239522009ed114fdf56764b8ff1b16bdd4d6c8aede203" protocol=ttrpc version=3 Jul 15 05:33:28.298170 systemd[1]: Started cri-containerd-c69d8ed0d2c2927041044cc7fc56db4826f0f4aee0f4d936eabd19275043b16c.scope - libcontainer container c69d8ed0d2c2927041044cc7fc56db4826f0f4aee0f4d936eabd19275043b16c. 
Jul 15 05:33:28.355193 containerd[1551]: time="2025-07-15T05:33:28.355164701Z" level=info msg="StartContainer for \"c69d8ed0d2c2927041044cc7fc56db4826f0f4aee0f4d936eabd19275043b16c\" returns successfully" Jul 15 05:33:28.763337 systemd-networkd[1457]: cali34413435777: Gained IPv6LL Jul 15 05:33:28.806733 containerd[1551]: time="2025-07-15T05:33:28.806689696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85ccff6498-v45rk,Uid:6765a94e-ca61-470d-b923-4780beee4dfa,Namespace:calico-system,Attempt:0,}" Jul 15 05:33:28.808232 kubelet[2737]: E0715 05:33:28.808199 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:33:28.810093 containerd[1551]: time="2025-07-15T05:33:28.809227324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-vpk4j,Uid:bd15ad43-a0fe-48b6-a4a4-a4c5f213d373,Namespace:calico-system,Attempt:0,}" Jul 15 05:33:28.810093 containerd[1551]: time="2025-07-15T05:33:28.809359159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cpztt,Uid:71c71ad4-1367-46ee-b402-3c8bcaa7064a,Namespace:kube-system,Attempt:0,}" Jul 15 05:33:28.906798 containerd[1551]: time="2025-07-15T05:33:28.906757424Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:33:28.908338 containerd[1551]: time="2025-07-15T05:33:28.907187378Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 15 05:33:28.913093 containerd[1551]: time="2025-07-15T05:33:28.912831154Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:33:28.913672 containerd[1551]: 
time="2025-07-15T05:33:28.913637225Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 752.375295ms" Jul 15 05:33:28.913713 containerd[1551]: time="2025-07-15T05:33:28.913673403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 15 05:33:28.916105 containerd[1551]: time="2025-07-15T05:33:28.913885586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:33:28.918402 containerd[1551]: time="2025-07-15T05:33:28.918374693Z" level=info msg="CreateContainer within sandbox \"8be8fecb584582548fca412e50f5251e69b423e88f4216ef81263ebcdcea172d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 15 05:33:28.928933 containerd[1551]: time="2025-07-15T05:33:28.927261991Z" level=info msg="Container 90d963d5de7b93d658622bb2e0e0c37f6d5a74901312fc99bd1cbcbc6cdd30a0: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:33:28.935251 containerd[1551]: time="2025-07-15T05:33:28.934316566Z" level=info msg="CreateContainer within sandbox \"8be8fecb584582548fca412e50f5251e69b423e88f4216ef81263ebcdcea172d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"90d963d5de7b93d658622bb2e0e0c37f6d5a74901312fc99bd1cbcbc6cdd30a0\"" Jul 15 05:33:28.939286 containerd[1551]: time="2025-07-15T05:33:28.939257847Z" level=info msg="StartContainer for \"90d963d5de7b93d658622bb2e0e0c37f6d5a74901312fc99bd1cbcbc6cdd30a0\"" Jul 15 05:33:28.940838 containerd[1551]: time="2025-07-15T05:33:28.940808111Z" 
level=info msg="connecting to shim 90d963d5de7b93d658622bb2e0e0c37f6d5a74901312fc99bd1cbcbc6cdd30a0" address="unix:///run/containerd/s/0a137f941ee73bcb7b7eff38935d3b212159168929649d6ed2d44b3501f610d6" protocol=ttrpc version=3 Jul 15 05:33:28.972394 kubelet[2737]: E0715 05:33:28.972363 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:33:29.010598 systemd[1]: Started cri-containerd-90d963d5de7b93d658622bb2e0e0c37f6d5a74901312fc99bd1cbcbc6cdd30a0.scope - libcontainer container 90d963d5de7b93d658622bb2e0e0c37f6d5a74901312fc99bd1cbcbc6cdd30a0. Jul 15 05:33:29.060804 systemd-networkd[1457]: cali98242cc1b5e: Link UP Jul 15 05:33:29.062970 systemd-networkd[1457]: cali98242cc1b5e: Gained carrier Jul 15 05:33:29.074317 kubelet[2737]: I0715 05:33:29.074156 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-57b4fb66-qrqgk" podStartSLOduration=24.074139038 podStartE2EDuration="24.074139038s" podCreationTimestamp="2025-07-15 05:33:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:33:28.991204677 +0000 UTC m=+39.317145946" watchObservedRunningTime="2025-07-15 05:33:29.074139038 +0000 UTC m=+39.400080307" Jul 15 05:33:29.078704 containerd[1551]: 2025-07-15 05:33:28.948 [INFO][4685] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--155--110-k8s-goldmane--768f4c5c69--vpk4j-eth0 goldmane-768f4c5c69- calico-system bd15ad43-a0fe-48b6-a4a4-a4c5f213d373 822 0 2025-07-15 05:33:06 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 
172-237-155-110 goldmane-768f4c5c69-vpk4j eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali98242cc1b5e [] [] }} ContainerID="68a71c8e473d5bad3f5857e17c120db50a5f3118008971845c96a63c9d58febb" Namespace="calico-system" Pod="goldmane-768f4c5c69-vpk4j" WorkloadEndpoint="172--237--155--110-k8s-goldmane--768f4c5c69--vpk4j-" Jul 15 05:33:29.078704 containerd[1551]: 2025-07-15 05:33:28.948 [INFO][4685] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="68a71c8e473d5bad3f5857e17c120db50a5f3118008971845c96a63c9d58febb" Namespace="calico-system" Pod="goldmane-768f4c5c69-vpk4j" WorkloadEndpoint="172--237--155--110-k8s-goldmane--768f4c5c69--vpk4j-eth0" Jul 15 05:33:29.078704 containerd[1551]: 2025-07-15 05:33:29.010 [INFO][4720] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="68a71c8e473d5bad3f5857e17c120db50a5f3118008971845c96a63c9d58febb" HandleID="k8s-pod-network.68a71c8e473d5bad3f5857e17c120db50a5f3118008971845c96a63c9d58febb" Workload="172--237--155--110-k8s-goldmane--768f4c5c69--vpk4j-eth0" Jul 15 05:33:29.078704 containerd[1551]: 2025-07-15 05:33:29.010 [INFO][4720] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="68a71c8e473d5bad3f5857e17c120db50a5f3118008971845c96a63c9d58febb" HandleID="k8s-pod-network.68a71c8e473d5bad3f5857e17c120db50a5f3118008971845c96a63c9d58febb" Workload="172--237--155--110-k8s-goldmane--768f4c5c69--vpk4j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037d930), Attrs:map[string]string{"namespace":"calico-system", "node":"172-237-155-110", "pod":"goldmane-768f4c5c69-vpk4j", "timestamp":"2025-07-15 05:33:29.010226757 +0000 UTC"}, Hostname:"172-237-155-110", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:33:29.078704 containerd[1551]: 2025-07-15 05:33:29.010 [INFO][4720] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:33:29.078704 containerd[1551]: 2025-07-15 05:33:29.010 [INFO][4720] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 05:33:29.078704 containerd[1551]: 2025-07-15 05:33:29.010 [INFO][4720] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-155-110' Jul 15 05:33:29.078704 containerd[1551]: 2025-07-15 05:33:29.020 [INFO][4720] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.68a71c8e473d5bad3f5857e17c120db50a5f3118008971845c96a63c9d58febb" host="172-237-155-110" Jul 15 05:33:29.078704 containerd[1551]: 2025-07-15 05:33:29.025 [INFO][4720] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-155-110" Jul 15 05:33:29.078704 containerd[1551]: 2025-07-15 05:33:29.029 [INFO][4720] ipam/ipam.go 511: Trying affinity for 192.168.39.0/26 host="172-237-155-110" Jul 15 05:33:29.078704 containerd[1551]: 2025-07-15 05:33:29.031 [INFO][4720] ipam/ipam.go 158: Attempting to load block cidr=192.168.39.0/26 host="172-237-155-110" Jul 15 05:33:29.078704 containerd[1551]: 2025-07-15 05:33:29.034 [INFO][4720] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.39.0/26 host="172-237-155-110" Jul 15 05:33:29.078704 containerd[1551]: 2025-07-15 05:33:29.034 [INFO][4720] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.39.0/26 handle="k8s-pod-network.68a71c8e473d5bad3f5857e17c120db50a5f3118008971845c96a63c9d58febb" host="172-237-155-110" Jul 15 05:33:29.078704 containerd[1551]: 2025-07-15 05:33:29.036 [INFO][4720] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.68a71c8e473d5bad3f5857e17c120db50a5f3118008971845c96a63c9d58febb Jul 15 05:33:29.078704 containerd[1551]: 2025-07-15 05:33:29.041 [INFO][4720] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.39.0/26 handle="k8s-pod-network.68a71c8e473d5bad3f5857e17c120db50a5f3118008971845c96a63c9d58febb" 
host="172-237-155-110" Jul 15 05:33:29.078704 containerd[1551]: 2025-07-15 05:33:29.046 [INFO][4720] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.39.6/26] block=192.168.39.0/26 handle="k8s-pod-network.68a71c8e473d5bad3f5857e17c120db50a5f3118008971845c96a63c9d58febb" host="172-237-155-110" Jul 15 05:33:29.078704 containerd[1551]: 2025-07-15 05:33:29.046 [INFO][4720] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.39.6/26] handle="k8s-pod-network.68a71c8e473d5bad3f5857e17c120db50a5f3118008971845c96a63c9d58febb" host="172-237-155-110" Jul 15 05:33:29.078704 containerd[1551]: 2025-07-15 05:33:29.047 [INFO][4720] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 05:33:29.078704 containerd[1551]: 2025-07-15 05:33:29.047 [INFO][4720] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.39.6/26] IPv6=[] ContainerID="68a71c8e473d5bad3f5857e17c120db50a5f3118008971845c96a63c9d58febb" HandleID="k8s-pod-network.68a71c8e473d5bad3f5857e17c120db50a5f3118008971845c96a63c9d58febb" Workload="172--237--155--110-k8s-goldmane--768f4c5c69--vpk4j-eth0" Jul 15 05:33:29.080790 containerd[1551]: 2025-07-15 05:33:29.052 [INFO][4685] cni-plugin/k8s.go 418: Populated endpoint ContainerID="68a71c8e473d5bad3f5857e17c120db50a5f3118008971845c96a63c9d58febb" Namespace="calico-system" Pod="goldmane-768f4c5c69-vpk4j" WorkloadEndpoint="172--237--155--110-k8s-goldmane--768f4c5c69--vpk4j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--155--110-k8s-goldmane--768f4c5c69--vpk4j-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"bd15ad43-a0fe-48b6-a4a4-a4c5f213d373", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 33, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", 
"k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-155-110", ContainerID:"", Pod:"goldmane-768f4c5c69-vpk4j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.39.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali98242cc1b5e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:33:29.080790 containerd[1551]: 2025-07-15 05:33:29.054 [INFO][4685] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.39.6/32] ContainerID="68a71c8e473d5bad3f5857e17c120db50a5f3118008971845c96a63c9d58febb" Namespace="calico-system" Pod="goldmane-768f4c5c69-vpk4j" WorkloadEndpoint="172--237--155--110-k8s-goldmane--768f4c5c69--vpk4j-eth0" Jul 15 05:33:29.080790 containerd[1551]: 2025-07-15 05:33:29.055 [INFO][4685] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali98242cc1b5e ContainerID="68a71c8e473d5bad3f5857e17c120db50a5f3118008971845c96a63c9d58febb" Namespace="calico-system" Pod="goldmane-768f4c5c69-vpk4j" WorkloadEndpoint="172--237--155--110-k8s-goldmane--768f4c5c69--vpk4j-eth0" Jul 15 05:33:29.080790 containerd[1551]: 2025-07-15 05:33:29.060 [INFO][4685] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="68a71c8e473d5bad3f5857e17c120db50a5f3118008971845c96a63c9d58febb" Namespace="calico-system" Pod="goldmane-768f4c5c69-vpk4j" WorkloadEndpoint="172--237--155--110-k8s-goldmane--768f4c5c69--vpk4j-eth0" Jul 15 05:33:29.080790 containerd[1551]: 2025-07-15 05:33:29.061 
[INFO][4685] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="68a71c8e473d5bad3f5857e17c120db50a5f3118008971845c96a63c9d58febb" Namespace="calico-system" Pod="goldmane-768f4c5c69-vpk4j" WorkloadEndpoint="172--237--155--110-k8s-goldmane--768f4c5c69--vpk4j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--155--110-k8s-goldmane--768f4c5c69--vpk4j-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"bd15ad43-a0fe-48b6-a4a4-a4c5f213d373", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 33, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-155-110", ContainerID:"68a71c8e473d5bad3f5857e17c120db50a5f3118008971845c96a63c9d58febb", Pod:"goldmane-768f4c5c69-vpk4j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.39.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali98242cc1b5e", MAC:"76:60:c4:f6:45:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:33:29.080790 containerd[1551]: 2025-07-15 05:33:29.072 [INFO][4685] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="68a71c8e473d5bad3f5857e17c120db50a5f3118008971845c96a63c9d58febb" Namespace="calico-system" Pod="goldmane-768f4c5c69-vpk4j" WorkloadEndpoint="172--237--155--110-k8s-goldmane--768f4c5c69--vpk4j-eth0" Jul 15 05:33:29.101497 containerd[1551]: time="2025-07-15T05:33:29.101447336Z" level=info msg="connecting to shim 68a71c8e473d5bad3f5857e17c120db50a5f3118008971845c96a63c9d58febb" address="unix:///run/containerd/s/46c99c8410cf870071aab6d094f0d5a5c355eb86ddc7b86d267c763de66c85c6" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:33:29.152303 systemd[1]: Started cri-containerd-68a71c8e473d5bad3f5857e17c120db50a5f3118008971845c96a63c9d58febb.scope - libcontainer container 68a71c8e473d5bad3f5857e17c120db50a5f3118008971845c96a63c9d58febb. Jul 15 05:33:29.188065 containerd[1551]: time="2025-07-15T05:33:29.188028559Z" level=info msg="StartContainer for \"90d963d5de7b93d658622bb2e0e0c37f6d5a74901312fc99bd1cbcbc6cdd30a0\" returns successfully" Jul 15 05:33:29.190517 containerd[1551]: time="2025-07-15T05:33:29.190228232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 15 05:33:29.197619 systemd-networkd[1457]: cali42ab183ecfc: Link UP Jul 15 05:33:29.199429 systemd-networkd[1457]: cali42ab183ecfc: Gained carrier Jul 15 05:33:29.215345 containerd[1551]: 2025-07-15 05:33:28.943 [INFO][4678] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--155--110-k8s-coredns--674b8bbfcf--cpztt-eth0 coredns-674b8bbfcf- kube-system 71c71ad4-1367-46ee-b402-3c8bcaa7064a 814 0 2025-07-15 05:32:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-237-155-110 coredns-674b8bbfcf-cpztt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali42ab183ecfc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="9c9faada69828163f80b39eec23dfcd61a06571bef0246980c3e2147e2278b8a" Namespace="kube-system" Pod="coredns-674b8bbfcf-cpztt" WorkloadEndpoint="172--237--155--110-k8s-coredns--674b8bbfcf--cpztt-" Jul 15 05:33:29.215345 containerd[1551]: 2025-07-15 05:33:28.944 [INFO][4678] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9c9faada69828163f80b39eec23dfcd61a06571bef0246980c3e2147e2278b8a" Namespace="kube-system" Pod="coredns-674b8bbfcf-cpztt" WorkloadEndpoint="172--237--155--110-k8s-coredns--674b8bbfcf--cpztt-eth0" Jul 15 05:33:29.215345 containerd[1551]: 2025-07-15 05:33:29.010 [INFO][4714] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9c9faada69828163f80b39eec23dfcd61a06571bef0246980c3e2147e2278b8a" HandleID="k8s-pod-network.9c9faada69828163f80b39eec23dfcd61a06571bef0246980c3e2147e2278b8a" Workload="172--237--155--110-k8s-coredns--674b8bbfcf--cpztt-eth0" Jul 15 05:33:29.215345 containerd[1551]: 2025-07-15 05:33:29.014 [INFO][4714] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9c9faada69828163f80b39eec23dfcd61a06571bef0246980c3e2147e2278b8a" HandleID="k8s-pod-network.9c9faada69828163f80b39eec23dfcd61a06571bef0246980c3e2147e2278b8a" Workload="172--237--155--110-k8s-coredns--674b8bbfcf--cpztt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000333740), Attrs:map[string]string{"namespace":"kube-system", "node":"172-237-155-110", "pod":"coredns-674b8bbfcf-cpztt", "timestamp":"2025-07-15 05:33:29.010010035 +0000 UTC"}, Hostname:"172-237-155-110", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:33:29.215345 containerd[1551]: 2025-07-15 05:33:29.015 [INFO][4714] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 15 05:33:29.215345 containerd[1551]: 2025-07-15 05:33:29.046 [INFO][4714] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 05:33:29.215345 containerd[1551]: 2025-07-15 05:33:29.046 [INFO][4714] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-155-110' Jul 15 05:33:29.215345 containerd[1551]: 2025-07-15 05:33:29.127 [INFO][4714] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9c9faada69828163f80b39eec23dfcd61a06571bef0246980c3e2147e2278b8a" host="172-237-155-110" Jul 15 05:33:29.215345 containerd[1551]: 2025-07-15 05:33:29.136 [INFO][4714] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-155-110" Jul 15 05:33:29.215345 containerd[1551]: 2025-07-15 05:33:29.147 [INFO][4714] ipam/ipam.go 511: Trying affinity for 192.168.39.0/26 host="172-237-155-110" Jul 15 05:33:29.215345 containerd[1551]: 2025-07-15 05:33:29.151 [INFO][4714] ipam/ipam.go 158: Attempting to load block cidr=192.168.39.0/26 host="172-237-155-110" Jul 15 05:33:29.215345 containerd[1551]: 2025-07-15 05:33:29.157 [INFO][4714] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.39.0/26 host="172-237-155-110" Jul 15 05:33:29.215345 containerd[1551]: 2025-07-15 05:33:29.157 [INFO][4714] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.39.0/26 handle="k8s-pod-network.9c9faada69828163f80b39eec23dfcd61a06571bef0246980c3e2147e2278b8a" host="172-237-155-110" Jul 15 05:33:29.215345 containerd[1551]: 2025-07-15 05:33:29.163 [INFO][4714] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9c9faada69828163f80b39eec23dfcd61a06571bef0246980c3e2147e2278b8a Jul 15 05:33:29.215345 containerd[1551]: 2025-07-15 05:33:29.171 [INFO][4714] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.39.0/26 handle="k8s-pod-network.9c9faada69828163f80b39eec23dfcd61a06571bef0246980c3e2147e2278b8a" host="172-237-155-110" Jul 15 05:33:29.215345 containerd[1551]: 2025-07-15 
05:33:29.178 [INFO][4714] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.39.7/26] block=192.168.39.0/26 handle="k8s-pod-network.9c9faada69828163f80b39eec23dfcd61a06571bef0246980c3e2147e2278b8a" host="172-237-155-110" Jul 15 05:33:29.215345 containerd[1551]: 2025-07-15 05:33:29.179 [INFO][4714] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.39.7/26] handle="k8s-pod-network.9c9faada69828163f80b39eec23dfcd61a06571bef0246980c3e2147e2278b8a" host="172-237-155-110" Jul 15 05:33:29.215345 containerd[1551]: 2025-07-15 05:33:29.179 [INFO][4714] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 05:33:29.215345 containerd[1551]: 2025-07-15 05:33:29.179 [INFO][4714] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.39.7/26] IPv6=[] ContainerID="9c9faada69828163f80b39eec23dfcd61a06571bef0246980c3e2147e2278b8a" HandleID="k8s-pod-network.9c9faada69828163f80b39eec23dfcd61a06571bef0246980c3e2147e2278b8a" Workload="172--237--155--110-k8s-coredns--674b8bbfcf--cpztt-eth0" Jul 15 05:33:29.215720 containerd[1551]: 2025-07-15 05:33:29.190 [INFO][4678] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9c9faada69828163f80b39eec23dfcd61a06571bef0246980c3e2147e2278b8a" Namespace="kube-system" Pod="coredns-674b8bbfcf-cpztt" WorkloadEndpoint="172--237--155--110-k8s-coredns--674b8bbfcf--cpztt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--155--110-k8s-coredns--674b8bbfcf--cpztt-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"71c71ad4-1367-46ee-b402-3c8bcaa7064a", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 32, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-155-110", ContainerID:"", Pod:"coredns-674b8bbfcf-cpztt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.39.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali42ab183ecfc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:33:29.215720 containerd[1551]: 2025-07-15 05:33:29.190 [INFO][4678] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.39.7/32] ContainerID="9c9faada69828163f80b39eec23dfcd61a06571bef0246980c3e2147e2278b8a" Namespace="kube-system" Pod="coredns-674b8bbfcf-cpztt" WorkloadEndpoint="172--237--155--110-k8s-coredns--674b8bbfcf--cpztt-eth0" Jul 15 05:33:29.215720 containerd[1551]: 2025-07-15 05:33:29.190 [INFO][4678] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali42ab183ecfc ContainerID="9c9faada69828163f80b39eec23dfcd61a06571bef0246980c3e2147e2278b8a" Namespace="kube-system" Pod="coredns-674b8bbfcf-cpztt" WorkloadEndpoint="172--237--155--110-k8s-coredns--674b8bbfcf--cpztt-eth0" Jul 15 05:33:29.215720 containerd[1551]: 2025-07-15 05:33:29.198 [INFO][4678] cni-plugin/dataplane_linux.go 508: Disabling 
IPv4 forwarding ContainerID="9c9faada69828163f80b39eec23dfcd61a06571bef0246980c3e2147e2278b8a" Namespace="kube-system" Pod="coredns-674b8bbfcf-cpztt" WorkloadEndpoint="172--237--155--110-k8s-coredns--674b8bbfcf--cpztt-eth0" Jul 15 05:33:29.215720 containerd[1551]: 2025-07-15 05:33:29.199 [INFO][4678] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9c9faada69828163f80b39eec23dfcd61a06571bef0246980c3e2147e2278b8a" Namespace="kube-system" Pod="coredns-674b8bbfcf-cpztt" WorkloadEndpoint="172--237--155--110-k8s-coredns--674b8bbfcf--cpztt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--155--110-k8s-coredns--674b8bbfcf--cpztt-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"71c71ad4-1367-46ee-b402-3c8bcaa7064a", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 32, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-155-110", ContainerID:"9c9faada69828163f80b39eec23dfcd61a06571bef0246980c3e2147e2278b8a", Pod:"coredns-674b8bbfcf-cpztt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.39.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali42ab183ecfc", MAC:"fa:0a:4b:99:30:7e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:33:29.215720 containerd[1551]: 2025-07-15 05:33:29.209 [INFO][4678] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9c9faada69828163f80b39eec23dfcd61a06571bef0246980c3e2147e2278b8a" Namespace="kube-system" Pod="coredns-674b8bbfcf-cpztt" WorkloadEndpoint="172--237--155--110-k8s-coredns--674b8bbfcf--cpztt-eth0" Jul 15 05:33:29.242614 containerd[1551]: time="2025-07-15T05:33:29.241377711Z" level=info msg="connecting to shim 9c9faada69828163f80b39eec23dfcd61a06571bef0246980c3e2147e2278b8a" address="unix:///run/containerd/s/3c8280d3de88fe3a24103404859d246d63c662c343c6e38259ea3fddd4a0761a" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:33:29.275387 systemd[1]: Started cri-containerd-9c9faada69828163f80b39eec23dfcd61a06571bef0246980c3e2147e2278b8a.scope - libcontainer container 9c9faada69828163f80b39eec23dfcd61a06571bef0246980c3e2147e2278b8a. 
Jul 15 05:33:29.285974 systemd-networkd[1457]: cali730f18dc8ce: Link UP Jul 15 05:33:29.292110 systemd-networkd[1457]: cali730f18dc8ce: Gained carrier Jul 15 05:33:29.307576 containerd[1551]: 2025-07-15 05:33:28.958 [INFO][4672] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--155--110-k8s-calico--kube--controllers--85ccff6498--v45rk-eth0 calico-kube-controllers-85ccff6498- calico-system 6765a94e-ca61-470d-b923-4780beee4dfa 816 0 2025-07-15 05:33:07 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:85ccff6498 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-237-155-110 calico-kube-controllers-85ccff6498-v45rk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali730f18dc8ce [] [] }} ContainerID="cd166d6307c5ffcca94bb19f69e9d72e1daf2403ddfa1a5c0b205e509f16b93c" Namespace="calico-system" Pod="calico-kube-controllers-85ccff6498-v45rk" WorkloadEndpoint="172--237--155--110-k8s-calico--kube--controllers--85ccff6498--v45rk-" Jul 15 05:33:29.307576 containerd[1551]: 2025-07-15 05:33:28.958 [INFO][4672] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cd166d6307c5ffcca94bb19f69e9d72e1daf2403ddfa1a5c0b205e509f16b93c" Namespace="calico-system" Pod="calico-kube-controllers-85ccff6498-v45rk" WorkloadEndpoint="172--237--155--110-k8s-calico--kube--controllers--85ccff6498--v45rk-eth0" Jul 15 05:33:29.307576 containerd[1551]: 2025-07-15 05:33:29.048 [INFO][4740] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd166d6307c5ffcca94bb19f69e9d72e1daf2403ddfa1a5c0b205e509f16b93c" HandleID="k8s-pod-network.cd166d6307c5ffcca94bb19f69e9d72e1daf2403ddfa1a5c0b205e509f16b93c" Workload="172--237--155--110-k8s-calico--kube--controllers--85ccff6498--v45rk-eth0" 
Jul 15 05:33:29.307576 containerd[1551]: 2025-07-15 05:33:29.049 [INFO][4740] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cd166d6307c5ffcca94bb19f69e9d72e1daf2403ddfa1a5c0b205e509f16b93c" HandleID="k8s-pod-network.cd166d6307c5ffcca94bb19f69e9d72e1daf2403ddfa1a5c0b205e509f16b93c" Workload="172--237--155--110-k8s-calico--kube--controllers--85ccff6498--v45rk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fb50), Attrs:map[string]string{"namespace":"calico-system", "node":"172-237-155-110", "pod":"calico-kube-controllers-85ccff6498-v45rk", "timestamp":"2025-07-15 05:33:29.048735572 +0000 UTC"}, Hostname:"172-237-155-110", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:33:29.307576 containerd[1551]: 2025-07-15 05:33:29.049 [INFO][4740] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:33:29.307576 containerd[1551]: 2025-07-15 05:33:29.179 [INFO][4740] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 05:33:29.307576 containerd[1551]: 2025-07-15 05:33:29.180 [INFO][4740] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-155-110' Jul 15 05:33:29.307576 containerd[1551]: 2025-07-15 05:33:29.221 [INFO][4740] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cd166d6307c5ffcca94bb19f69e9d72e1daf2403ddfa1a5c0b205e509f16b93c" host="172-237-155-110" Jul 15 05:33:29.307576 containerd[1551]: 2025-07-15 05:33:29.240 [INFO][4740] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-155-110" Jul 15 05:33:29.307576 containerd[1551]: 2025-07-15 05:33:29.247 [INFO][4740] ipam/ipam.go 511: Trying affinity for 192.168.39.0/26 host="172-237-155-110" Jul 15 05:33:29.307576 containerd[1551]: 2025-07-15 05:33:29.249 [INFO][4740] ipam/ipam.go 158: Attempting to load block cidr=192.168.39.0/26 host="172-237-155-110" Jul 15 05:33:29.307576 containerd[1551]: 2025-07-15 05:33:29.253 [INFO][4740] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.39.0/26 host="172-237-155-110" Jul 15 05:33:29.307576 containerd[1551]: 2025-07-15 05:33:29.253 [INFO][4740] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.39.0/26 handle="k8s-pod-network.cd166d6307c5ffcca94bb19f69e9d72e1daf2403ddfa1a5c0b205e509f16b93c" host="172-237-155-110" Jul 15 05:33:29.307576 containerd[1551]: 2025-07-15 05:33:29.255 [INFO][4740] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cd166d6307c5ffcca94bb19f69e9d72e1daf2403ddfa1a5c0b205e509f16b93c Jul 15 05:33:29.307576 containerd[1551]: 2025-07-15 05:33:29.261 [INFO][4740] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.39.0/26 handle="k8s-pod-network.cd166d6307c5ffcca94bb19f69e9d72e1daf2403ddfa1a5c0b205e509f16b93c" host="172-237-155-110" Jul 15 05:33:29.307576 containerd[1551]: 2025-07-15 05:33:29.269 [INFO][4740] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.39.8/26] block=192.168.39.0/26 
handle="k8s-pod-network.cd166d6307c5ffcca94bb19f69e9d72e1daf2403ddfa1a5c0b205e509f16b93c" host="172-237-155-110" Jul 15 05:33:29.307576 containerd[1551]: 2025-07-15 05:33:29.269 [INFO][4740] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.39.8/26] handle="k8s-pod-network.cd166d6307c5ffcca94bb19f69e9d72e1daf2403ddfa1a5c0b205e509f16b93c" host="172-237-155-110" Jul 15 05:33:29.307576 containerd[1551]: 2025-07-15 05:33:29.269 [INFO][4740] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 05:33:29.307576 containerd[1551]: 2025-07-15 05:33:29.269 [INFO][4740] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.39.8/26] IPv6=[] ContainerID="cd166d6307c5ffcca94bb19f69e9d72e1daf2403ddfa1a5c0b205e509f16b93c" HandleID="k8s-pod-network.cd166d6307c5ffcca94bb19f69e9d72e1daf2403ddfa1a5c0b205e509f16b93c" Workload="172--237--155--110-k8s-calico--kube--controllers--85ccff6498--v45rk-eth0" Jul 15 05:33:29.307960 containerd[1551]: 2025-07-15 05:33:29.275 [INFO][4672] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cd166d6307c5ffcca94bb19f69e9d72e1daf2403ddfa1a5c0b205e509f16b93c" Namespace="calico-system" Pod="calico-kube-controllers-85ccff6498-v45rk" WorkloadEndpoint="172--237--155--110-k8s-calico--kube--controllers--85ccff6498--v45rk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--155--110-k8s-calico--kube--controllers--85ccff6498--v45rk-eth0", GenerateName:"calico-kube-controllers-85ccff6498-", Namespace:"calico-system", SelfLink:"", UID:"6765a94e-ca61-470d-b923-4780beee4dfa", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 33, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85ccff6498", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-155-110", ContainerID:"", Pod:"calico-kube-controllers-85ccff6498-v45rk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.39.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali730f18dc8ce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:33:29.307960 containerd[1551]: 2025-07-15 05:33:29.275 [INFO][4672] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.39.8/32] ContainerID="cd166d6307c5ffcca94bb19f69e9d72e1daf2403ddfa1a5c0b205e509f16b93c" Namespace="calico-system" Pod="calico-kube-controllers-85ccff6498-v45rk" WorkloadEndpoint="172--237--155--110-k8s-calico--kube--controllers--85ccff6498--v45rk-eth0" Jul 15 05:33:29.307960 containerd[1551]: 2025-07-15 05:33:29.276 [INFO][4672] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali730f18dc8ce ContainerID="cd166d6307c5ffcca94bb19f69e9d72e1daf2403ddfa1a5c0b205e509f16b93c" Namespace="calico-system" Pod="calico-kube-controllers-85ccff6498-v45rk" WorkloadEndpoint="172--237--155--110-k8s-calico--kube--controllers--85ccff6498--v45rk-eth0" Jul 15 05:33:29.307960 containerd[1551]: 2025-07-15 05:33:29.293 [INFO][4672] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd166d6307c5ffcca94bb19f69e9d72e1daf2403ddfa1a5c0b205e509f16b93c" Namespace="calico-system" Pod="calico-kube-controllers-85ccff6498-v45rk" 
WorkloadEndpoint="172--237--155--110-k8s-calico--kube--controllers--85ccff6498--v45rk-eth0" Jul 15 05:33:29.307960 containerd[1551]: 2025-07-15 05:33:29.293 [INFO][4672] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cd166d6307c5ffcca94bb19f69e9d72e1daf2403ddfa1a5c0b205e509f16b93c" Namespace="calico-system" Pod="calico-kube-controllers-85ccff6498-v45rk" WorkloadEndpoint="172--237--155--110-k8s-calico--kube--controllers--85ccff6498--v45rk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--155--110-k8s-calico--kube--controllers--85ccff6498--v45rk-eth0", GenerateName:"calico-kube-controllers-85ccff6498-", Namespace:"calico-system", SelfLink:"", UID:"6765a94e-ca61-470d-b923-4780beee4dfa", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 33, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85ccff6498", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-155-110", ContainerID:"cd166d6307c5ffcca94bb19f69e9d72e1daf2403ddfa1a5c0b205e509f16b93c", Pod:"calico-kube-controllers-85ccff6498-v45rk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.39.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali730f18dc8ce", MAC:"fa:20:92:8d:87:4d", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:33:29.307960 containerd[1551]: 2025-07-15 05:33:29.303 [INFO][4672] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cd166d6307c5ffcca94bb19f69e9d72e1daf2403ddfa1a5c0b205e509f16b93c" Namespace="calico-system" Pod="calico-kube-controllers-85ccff6498-v45rk" WorkloadEndpoint="172--237--155--110-k8s-calico--kube--controllers--85ccff6498--v45rk-eth0" Jul 15 05:33:29.331870 containerd[1551]: time="2025-07-15T05:33:29.331796819Z" level=info msg="connecting to shim cd166d6307c5ffcca94bb19f69e9d72e1daf2403ddfa1a5c0b205e509f16b93c" address="unix:///run/containerd/s/ea1aef21fdecd9456e1c9f4e2d4b6bd6031422b64bd38a71a6081f7404d3a05b" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:33:29.362509 containerd[1551]: time="2025-07-15T05:33:29.362467929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cpztt,Uid:71c71ad4-1367-46ee-b402-3c8bcaa7064a,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c9faada69828163f80b39eec23dfcd61a06571bef0246980c3e2147e2278b8a\"" Jul 15 05:33:29.364587 kubelet[2737]: E0715 05:33:29.364179 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:33:29.368408 containerd[1551]: time="2025-07-15T05:33:29.368390171Z" level=info msg="CreateContainer within sandbox \"9c9faada69828163f80b39eec23dfcd61a06571bef0246980c3e2147e2278b8a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 05:33:29.378192 containerd[1551]: time="2025-07-15T05:33:29.378174456Z" level=info msg="Container e5abe9890c1ba1714e6e2ba963ed2de11cba7c4c98c137dbdc015bce0cb7e638: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:33:29.382707 containerd[1551]: time="2025-07-15T05:33:29.382668308Z" level=info msg="CreateContainer within sandbox 
\"9c9faada69828163f80b39eec23dfcd61a06571bef0246980c3e2147e2278b8a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e5abe9890c1ba1714e6e2ba963ed2de11cba7c4c98c137dbdc015bce0cb7e638\"" Jul 15 05:33:29.383232 containerd[1551]: time="2025-07-15T05:33:29.383217599Z" level=info msg="StartContainer for \"e5abe9890c1ba1714e6e2ba963ed2de11cba7c4c98c137dbdc015bce0cb7e638\"" Jul 15 05:33:29.383805 containerd[1551]: time="2025-07-15T05:33:29.383788759Z" level=info msg="connecting to shim e5abe9890c1ba1714e6e2ba963ed2de11cba7c4c98c137dbdc015bce0cb7e638" address="unix:///run/containerd/s/3c8280d3de88fe3a24103404859d246d63c662c343c6e38259ea3fddd4a0761a" protocol=ttrpc version=3 Jul 15 05:33:29.391693 systemd[1]: Started cri-containerd-cd166d6307c5ffcca94bb19f69e9d72e1daf2403ddfa1a5c0b205e509f16b93c.scope - libcontainer container cd166d6307c5ffcca94bb19f69e9d72e1daf2403ddfa1a5c0b205e509f16b93c. Jul 15 05:33:29.403215 systemd-networkd[1457]: cali9e1adf41a15: Gained IPv6LL Jul 15 05:33:29.422577 systemd[1]: Started cri-containerd-e5abe9890c1ba1714e6e2ba963ed2de11cba7c4c98c137dbdc015bce0cb7e638.scope - libcontainer container e5abe9890c1ba1714e6e2ba963ed2de11cba7c4c98c137dbdc015bce0cb7e638. 
Jul 15 05:33:29.475994 containerd[1551]: time="2025-07-15T05:33:29.475489071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-vpk4j,Uid:bd15ad43-a0fe-48b6-a4a4-a4c5f213d373,Namespace:calico-system,Attempt:0,} returns sandbox id \"68a71c8e473d5bad3f5857e17c120db50a5f3118008971845c96a63c9d58febb\"" Jul 15 05:33:29.504173 containerd[1551]: time="2025-07-15T05:33:29.504138343Z" level=info msg="StartContainer for \"e5abe9890c1ba1714e6e2ba963ed2de11cba7c4c98c137dbdc015bce0cb7e638\" returns successfully" Jul 15 05:33:29.595598 systemd-networkd[1457]: cali70248d6ada0: Gained IPv6LL Jul 15 05:33:29.608476 containerd[1551]: time="2025-07-15T05:33:29.608431202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85ccff6498-v45rk,Uid:6765a94e-ca61-470d-b923-4780beee4dfa,Namespace:calico-system,Attempt:0,} returns sandbox id \"cd166d6307c5ffcca94bb19f69e9d72e1daf2403ddfa1a5c0b205e509f16b93c\"" Jul 15 05:33:29.983847 kubelet[2737]: E0715 05:33:29.983815 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:33:29.992938 kubelet[2737]: E0715 05:33:29.992917 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:33:29.994934 kubelet[2737]: I0715 05:33:29.993142 2737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:33:30.014009 kubelet[2737]: I0715 05:33:30.013700 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-cpztt" podStartSLOduration=34.01368898 podStartE2EDuration="34.01368898s" podCreationTimestamp="2025-07-15 05:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-07-15 05:33:29.996784343 +0000 UTC m=+40.322725612" watchObservedRunningTime="2025-07-15 05:33:30.01368898 +0000 UTC m=+40.339630249" Jul 15 05:33:30.289745 containerd[1551]: time="2025-07-15T05:33:30.289623476Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:33:30.290619 containerd[1551]: time="2025-07-15T05:33:30.290592232Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 15 05:33:30.291041 containerd[1551]: time="2025-07-15T05:33:30.291016968Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:33:30.292825 containerd[1551]: time="2025-07-15T05:33:30.292789527Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:33:30.293787 containerd[1551]: time="2025-07-15T05:33:30.293755374Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.103453705s" Jul 15 05:33:30.293827 containerd[1551]: time="2025-07-15T05:33:30.293786233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 15 05:33:30.296220 containerd[1551]: time="2025-07-15T05:33:30.295923560Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 15 05:33:30.300845 containerd[1551]: time="2025-07-15T05:33:30.300820322Z" level=info msg="CreateContainer within sandbox \"8be8fecb584582548fca412e50f5251e69b423e88f4216ef81263ebcdcea172d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 15 05:33:30.313261 containerd[1551]: time="2025-07-15T05:33:30.313229588Z" level=info msg="Container 47a59ebfc2bf03e6fa8a0111c074e543d1c2a0b8701eac31da24928a702f84a4: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:33:30.318987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3937643280.mount: Deactivated successfully. Jul 15 05:33:30.323309 containerd[1551]: time="2025-07-15T05:33:30.323274574Z" level=info msg="CreateContainer within sandbox \"8be8fecb584582548fca412e50f5251e69b423e88f4216ef81263ebcdcea172d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"47a59ebfc2bf03e6fa8a0111c074e543d1c2a0b8701eac31da24928a702f84a4\"" Jul 15 05:33:30.324124 containerd[1551]: time="2025-07-15T05:33:30.323843594Z" level=info msg="StartContainer for \"47a59ebfc2bf03e6fa8a0111c074e543d1c2a0b8701eac31da24928a702f84a4\"" Jul 15 05:33:30.324891 containerd[1551]: time="2025-07-15T05:33:30.324856630Z" level=info msg="connecting to shim 47a59ebfc2bf03e6fa8a0111c074e543d1c2a0b8701eac31da24928a702f84a4" address="unix:///run/containerd/s/0a137f941ee73bcb7b7eff38935d3b212159168929649d6ed2d44b3501f610d6" protocol=ttrpc version=3 Jul 15 05:33:30.364217 systemd[1]: Started cri-containerd-47a59ebfc2bf03e6fa8a0111c074e543d1c2a0b8701eac31da24928a702f84a4.scope - libcontainer container 47a59ebfc2bf03e6fa8a0111c074e543d1c2a0b8701eac31da24928a702f84a4. 
Jul 15 05:33:30.485224 containerd[1551]: time="2025-07-15T05:33:30.485193062Z" level=info msg="StartContainer for \"47a59ebfc2bf03e6fa8a0111c074e543d1c2a0b8701eac31da24928a702f84a4\" returns successfully" Jul 15 05:33:30.683287 systemd-networkd[1457]: cali98242cc1b5e: Gained IPv6LL Jul 15 05:33:30.683548 systemd-networkd[1457]: cali42ab183ecfc: Gained IPv6LL Jul 15 05:33:30.747324 systemd-networkd[1457]: cali730f18dc8ce: Gained IPv6LL Jul 15 05:33:30.863183 kubelet[2737]: I0715 05:33:30.863164 2737 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 15 05:33:30.864420 kubelet[2737]: I0715 05:33:30.864398 2737 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 15 05:33:30.999726 kubelet[2737]: E0715 05:33:30.999368 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:33:32.104592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount956293864.mount: Deactivated successfully. 
Jul 15 05:33:32.534354 containerd[1551]: time="2025-07-15T05:33:32.534309404Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:33:32.535374 containerd[1551]: time="2025-07-15T05:33:32.535346670Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 15 05:33:32.535767 containerd[1551]: time="2025-07-15T05:33:32.535739797Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:33:32.538153 containerd[1551]: time="2025-07-15T05:33:32.538130740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:33:32.539018 containerd[1551]: time="2025-07-15T05:33:32.538873616Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 2.242929607s" Jul 15 05:33:32.539018 containerd[1551]: time="2025-07-15T05:33:32.538998412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 15 05:33:32.541016 containerd[1551]: time="2025-07-15T05:33:32.540387487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 15 05:33:32.543304 containerd[1551]: time="2025-07-15T05:33:32.543110019Z" level=info msg="CreateContainer within sandbox 
\"68a71c8e473d5bad3f5857e17c120db50a5f3118008971845c96a63c9d58febb\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 15 05:33:32.548045 containerd[1551]: time="2025-07-15T05:33:32.547979941Z" level=info msg="Container 042b8064e9ff164a1ce0d0c19550817f481827fb18efd2a71b0e704ec23a02e6: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:33:32.562659 containerd[1551]: time="2025-07-15T05:33:32.562627147Z" level=info msg="CreateContainer within sandbox \"68a71c8e473d5bad3f5857e17c120db50a5f3118008971845c96a63c9d58febb\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"042b8064e9ff164a1ce0d0c19550817f481827fb18efd2a71b0e704ec23a02e6\"" Jul 15 05:33:32.563612 containerd[1551]: time="2025-07-15T05:33:32.563064493Z" level=info msg="StartContainer for \"042b8064e9ff164a1ce0d0c19550817f481827fb18efd2a71b0e704ec23a02e6\"" Jul 15 05:33:32.564277 containerd[1551]: time="2025-07-15T05:33:32.564256594Z" level=info msg="connecting to shim 042b8064e9ff164a1ce0d0c19550817f481827fb18efd2a71b0e704ec23a02e6" address="unix:///run/containerd/s/46c99c8410cf870071aab6d094f0d5a5c355eb86ddc7b86d267c763de66c85c6" protocol=ttrpc version=3 Jul 15 05:33:32.587213 systemd[1]: Started cri-containerd-042b8064e9ff164a1ce0d0c19550817f481827fb18efd2a71b0e704ec23a02e6.scope - libcontainer container 042b8064e9ff164a1ce0d0c19550817f481827fb18efd2a71b0e704ec23a02e6. 
Jul 15 05:33:32.651258 containerd[1551]: time="2025-07-15T05:33:32.651142181Z" level=info msg="StartContainer for \"042b8064e9ff164a1ce0d0c19550817f481827fb18efd2a71b0e704ec23a02e6\" returns successfully" Jul 15 05:33:33.028508 kubelet[2737]: I0715 05:33:33.028385 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-vpk4j" podStartSLOduration=23.966732972 podStartE2EDuration="27.028065402s" podCreationTimestamp="2025-07-15 05:33:06 +0000 UTC" firstStartedPulling="2025-07-15 05:33:29.478447297 +0000 UTC m=+39.804388566" lastFinishedPulling="2025-07-15 05:33:32.539779727 +0000 UTC m=+42.865720996" observedRunningTime="2025-07-15 05:33:33.02496501 +0000 UTC m=+43.350906279" watchObservedRunningTime="2025-07-15 05:33:33.028065402 +0000 UTC m=+43.354006671" Jul 15 05:33:33.031069 kubelet[2737]: I0715 05:33:33.030313 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-62x4x" podStartSLOduration=23.894922919 podStartE2EDuration="26.030305812s" podCreationTimestamp="2025-07-15 05:33:07 +0000 UTC" firstStartedPulling="2025-07-15 05:33:28.160177109 +0000 UTC m=+38.486118378" lastFinishedPulling="2025-07-15 05:33:30.295560012 +0000 UTC m=+40.621501271" observedRunningTime="2025-07-15 05:33:31.016884118 +0000 UTC m=+41.342825387" watchObservedRunningTime="2025-07-15 05:33:33.030305812 +0000 UTC m=+43.356247081" Jul 15 05:33:33.870868 containerd[1551]: time="2025-07-15T05:33:33.870796182Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:33:33.873475 containerd[1551]: time="2025-07-15T05:33:33.873448709Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 15 05:33:33.874343 containerd[1551]: time="2025-07-15T05:33:33.874309232Z" level=info msg="ImageCreate event 
name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:33:33.877656 containerd[1551]: time="2025-07-15T05:33:33.877626027Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:33:33.878223 containerd[1551]: time="2025-07-15T05:33:33.878182150Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 1.337372997s" Jul 15 05:33:33.878223 containerd[1551]: time="2025-07-15T05:33:33.878213259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 15 05:33:33.902601 containerd[1551]: time="2025-07-15T05:33:33.902574111Z" level=info msg="CreateContainer within sandbox \"cd166d6307c5ffcca94bb19f69e9d72e1daf2403ddfa1a5c0b205e509f16b93c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 15 05:33:33.912213 containerd[1551]: time="2025-07-15T05:33:33.912184809Z" level=info msg="Container 21469a5cb312e2ab4df5f182ddeba704288da447c84f5914c73363f6c5ba5c64: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:33:33.916758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3029583044.mount: Deactivated successfully. 
Jul 15 05:33:33.921664 containerd[1551]: time="2025-07-15T05:33:33.921630821Z" level=info msg="CreateContainer within sandbox \"cd166d6307c5ffcca94bb19f69e9d72e1daf2403ddfa1a5c0b205e509f16b93c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"21469a5cb312e2ab4df5f182ddeba704288da447c84f5914c73363f6c5ba5c64\"" Jul 15 05:33:33.924152 containerd[1551]: time="2025-07-15T05:33:33.923225621Z" level=info msg="StartContainer for \"21469a5cb312e2ab4df5f182ddeba704288da447c84f5914c73363f6c5ba5c64\"" Jul 15 05:33:33.924386 containerd[1551]: time="2025-07-15T05:33:33.924368995Z" level=info msg="connecting to shim 21469a5cb312e2ab4df5f182ddeba704288da447c84f5914c73363f6c5ba5c64" address="unix:///run/containerd/s/ea1aef21fdecd9456e1c9f4e2d4b6bd6031422b64bd38a71a6081f7404d3a05b" protocol=ttrpc version=3 Jul 15 05:33:33.957136 systemd[1]: Started cri-containerd-21469a5cb312e2ab4df5f182ddeba704288da447c84f5914c73363f6c5ba5c64.scope - libcontainer container 21469a5cb312e2ab4df5f182ddeba704288da447c84f5914c73363f6c5ba5c64. 
Jul 15 05:33:34.019476 containerd[1551]: time="2025-07-15T05:33:34.019269503Z" level=info msg="StartContainer for \"21469a5cb312e2ab4df5f182ddeba704288da447c84f5914c73363f6c5ba5c64\" returns successfully" Jul 15 05:33:34.100240 containerd[1551]: time="2025-07-15T05:33:34.100185454Z" level=info msg="TaskExit event in podsandbox handler container_id:\"042b8064e9ff164a1ce0d0c19550817f481827fb18efd2a71b0e704ec23a02e6\" id:\"f0e4489391fe803f4afc4836a0b706db1bbc045f469dbfbcce136037ef3c1525\" pid:5115 exit_status:1 exited_at:{seconds:1752557614 nanos:99200204}" Jul 15 05:33:35.031497 kubelet[2737]: I0715 05:33:35.031437 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-85ccff6498-v45rk" podStartSLOduration=23.762233502 podStartE2EDuration="28.031315231s" podCreationTimestamp="2025-07-15 05:33:07 +0000 UTC" firstStartedPulling="2025-07-15 05:33:29.611220084 +0000 UTC m=+39.937161353" lastFinishedPulling="2025-07-15 05:33:33.880301823 +0000 UTC m=+44.206243082" observedRunningTime="2025-07-15 05:33:35.030851765 +0000 UTC m=+45.356793044" watchObservedRunningTime="2025-07-15 05:33:35.031315231 +0000 UTC m=+45.357256490" Jul 15 05:33:35.081335 containerd[1551]: time="2025-07-15T05:33:35.081281872Z" level=info msg="TaskExit event in podsandbox handler container_id:\"21469a5cb312e2ab4df5f182ddeba704288da447c84f5914c73363f6c5ba5c64\" id:\"eb995aef62e97b80a0694ee5a8752946dcabb473014a2162582b9bf1a6b34d0b\" pid:5149 exited_at:{seconds:1752557615 nanos:80700779}" Jul 15 05:33:35.113822 containerd[1551]: time="2025-07-15T05:33:35.113784673Z" level=info msg="TaskExit event in podsandbox handler container_id:\"042b8064e9ff164a1ce0d0c19550817f481827fb18efd2a71b0e704ec23a02e6\" id:\"124f34f48adb0c67118d806cc603fa9d26d7b9495d1572fa6ff96a3a62c18311\" pid:5163 exit_status:1 exited_at:{seconds:1752557615 nanos:113585779}" Jul 15 05:33:42.183127 kubelet[2737]: I0715 05:33:42.182914 2737 prober_manager.go:312] "Failed to trigger 
a manual run" probe="Readiness" Jul 15 05:33:53.149346 containerd[1551]: time="2025-07-15T05:33:53.149234087Z" level=info msg="TaskExit event in podsandbox handler container_id:\"21469a5cb312e2ab4df5f182ddeba704288da447c84f5914c73363f6c5ba5c64\" id:\"7995f5921b36e4fba71c5f2a18e294cfbac8e677c9f30d5a237ab6c38526bdb0\" pid:5205 exited_at:{seconds:1752557633 nanos:148218496}" Jul 15 05:33:54.374469 containerd[1551]: time="2025-07-15T05:33:54.374309558Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c54eed86ef5732bdb8222193b524de2af5571e915730455e11957d4307c4b084\" id:\"25b578a8ff90ba93375ef1ab5479ac021d78bd9a8ec163d2961b2d4fe9ece212\" pid:5227 exited_at:{seconds:1752557634 nanos:373923045}" Jul 15 05:34:04.805833 kubelet[2737]: E0715 05:34:04.805789 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:34:05.156157 containerd[1551]: time="2025-07-15T05:34:05.156127035Z" level=info msg="TaskExit event in podsandbox handler container_id:\"042b8064e9ff164a1ce0d0c19550817f481827fb18efd2a71b0e704ec23a02e6\" id:\"c6da49248541648e11149818bffefec1fb1354a647e1a92a4f031f5be93ce9ff\" pid:5269 exited_at:{seconds:1752557645 nanos:155462194}" Jul 15 05:34:05.167764 containerd[1551]: time="2025-07-15T05:34:05.167681763Z" level=info msg="TaskExit event in podsandbox handler container_id:\"21469a5cb312e2ab4df5f182ddeba704288da447c84f5914c73363f6c5ba5c64\" id:\"2e58a86f930baab83b45445273bb3ebd7f95f39566ad2ef0a2865e5609b9d94e\" pid:5274 exited_at:{seconds:1752557645 nanos:167433136}" Jul 15 05:34:07.787720 kubelet[2737]: I0715 05:34:07.787361 2737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:34:07.806614 kubelet[2737]: E0715 05:34:07.806455 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:34:10.806281 kubelet[2737]: E0715 05:34:10.806250 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:34:11.805691 kubelet[2737]: E0715 05:34:11.805649 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:34:19.168765 containerd[1551]: time="2025-07-15T05:34:19.168706475Z" level=info msg="TaskExit event in podsandbox handler container_id:\"042b8064e9ff164a1ce0d0c19550817f481827fb18efd2a71b0e704ec23a02e6\" id:\"8d3d4f6575baf0cda134aa15187e4b8c08716b7322ee9e6edc7dc48401a4d4f8\" pid:5312 exited_at:{seconds:1752557659 nanos:168387628}" Jul 15 05:34:24.391405 containerd[1551]: time="2025-07-15T05:34:24.391337761Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c54eed86ef5732bdb8222193b524de2af5571e915730455e11957d4307c4b084\" id:\"858d80666db7c6afaef7cfdcf4a45a040c4dc7731e152dad6136de0f44caf261\" pid:5335 exited_at:{seconds:1752557664 nanos:390884595}" Jul 15 05:34:32.806415 kubelet[2737]: E0715 05:34:32.806107 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:34:35.059509 containerd[1551]: time="2025-07-15T05:34:35.059469954Z" level=info msg="TaskExit event in podsandbox handler container_id:\"21469a5cb312e2ab4df5f182ddeba704288da447c84f5914c73363f6c5ba5c64\" id:\"7477a178c7f0d5d0d52492e52bfca9ce72023ed1aa856a519c915301a07ec7ef\" pid:5373 exited_at:{seconds:1752557675 nanos:59325236}" Jul 15 05:34:35.081485 containerd[1551]: time="2025-07-15T05:34:35.081461919Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"042b8064e9ff164a1ce0d0c19550817f481827fb18efd2a71b0e704ec23a02e6\" id:\"e20855ac64dd867b6d2fe4e1cc8b8785c616ab27a2bfffd195cce45cefa579f9\" pid:5372 exited_at:{seconds:1752557675 nanos:81280121}" Jul 15 05:34:38.806025 kubelet[2737]: E0715 05:34:38.805972 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:34:40.806218 kubelet[2737]: E0715 05:34:40.805851 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:34:53.147528 containerd[1551]: time="2025-07-15T05:34:53.147450726Z" level=info msg="TaskExit event in podsandbox handler container_id:\"21469a5cb312e2ab4df5f182ddeba704288da447c84f5914c73363f6c5ba5c64\" id:\"90ca8e6330a883db17b2889dbb4ca832e2d4d32bb88b89f844ec7d9359964dad\" pid:5422 exited_at:{seconds:1752557693 nanos:147109518}" Jul 15 05:34:54.383934 containerd[1551]: time="2025-07-15T05:34:54.383896969Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c54eed86ef5732bdb8222193b524de2af5571e915730455e11957d4307c4b084\" id:\"8acc89613ecca6938cc05fa7310c26c09f50dd6fbeabd17c160197cc3313fe92\" pid:5445 exited_at:{seconds:1752557694 nanos:383443481}" Jul 15 05:35:05.075678 containerd[1551]: time="2025-07-15T05:35:05.075612773Z" level=info msg="TaskExit event in podsandbox handler container_id:\"21469a5cb312e2ab4df5f182ddeba704288da447c84f5914c73363f6c5ba5c64\" id:\"6dbe5e21d0f8fa7f9e419a65534cae6c205bda536b698adf7c039b2e47e577b3\" pid:5496 exited_at:{seconds:1752557705 nanos:75376744}" Jul 15 05:35:05.111767 containerd[1551]: time="2025-07-15T05:35:05.111709180Z" level=info msg="TaskExit event in podsandbox handler container_id:\"042b8064e9ff164a1ce0d0c19550817f481827fb18efd2a71b0e704ec23a02e6\" 
id:\"fd1beb4560e7dd64f63fb8ae8972ec4c1bb3a01d2b3a845cfce74fc58b058570\" pid:5489 exited_at:{seconds:1752557705 nanos:111496991}" Jul 15 05:35:05.807257 kubelet[2737]: E0715 05:35:05.807178 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:35:06.573271 systemd[1]: Started sshd@7-172.237.155.110:22-139.178.68.195:60150.service - OpenSSH per-connection server daemon (139.178.68.195:60150). Jul 15 05:35:06.921341 sshd[5513]: Accepted publickey for core from 139.178.68.195 port 60150 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4 Jul 15 05:35:06.924901 sshd-session[5513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:35:06.937702 systemd-logind[1538]: New session 8 of user core. Jul 15 05:35:06.941504 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 15 05:35:07.242530 sshd[5516]: Connection closed by 139.178.68.195 port 60150 Jul 15 05:35:07.243244 sshd-session[5513]: pam_unix(sshd:session): session closed for user core Jul 15 05:35:07.250033 systemd-logind[1538]: Session 8 logged out. Waiting for processes to exit. Jul 15 05:35:07.250881 systemd[1]: sshd@7-172.237.155.110:22-139.178.68.195:60150.service: Deactivated successfully. Jul 15 05:35:07.253967 systemd[1]: session-8.scope: Deactivated successfully. Jul 15 05:35:07.256339 systemd-logind[1538]: Removed session 8. Jul 15 05:35:12.308234 systemd[1]: Started sshd@8-172.237.155.110:22-139.178.68.195:40738.service - OpenSSH per-connection server daemon (139.178.68.195:40738). 
Jul 15 05:35:12.655381 sshd[5529]: Accepted publickey for core from 139.178.68.195 port 40738 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4 Jul 15 05:35:12.657537 sshd-session[5529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:35:12.661672 systemd-logind[1538]: New session 9 of user core. Jul 15 05:35:12.669281 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 15 05:35:12.805548 kubelet[2737]: E0715 05:35:12.805521 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:35:12.950331 sshd[5532]: Connection closed by 139.178.68.195 port 40738 Jul 15 05:35:12.951053 sshd-session[5529]: pam_unix(sshd:session): session closed for user core Jul 15 05:35:12.954044 systemd[1]: sshd@8-172.237.155.110:22-139.178.68.195:40738.service: Deactivated successfully. Jul 15 05:35:12.955886 systemd[1]: session-9.scope: Deactivated successfully. Jul 15 05:35:12.956838 systemd-logind[1538]: Session 9 logged out. Waiting for processes to exit. Jul 15 05:35:12.958030 systemd-logind[1538]: Removed session 9. Jul 15 05:35:13.010455 systemd[1]: Started sshd@9-172.237.155.110:22-139.178.68.195:40754.service - OpenSSH per-connection server daemon (139.178.68.195:40754). Jul 15 05:35:13.348839 sshd[5545]: Accepted publickey for core from 139.178.68.195 port 40754 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4 Jul 15 05:35:13.351471 sshd-session[5545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:35:13.356823 systemd-logind[1538]: New session 10 of user core. Jul 15 05:35:13.362223 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jul 15 05:35:13.689269 sshd[5548]: Connection closed by 139.178.68.195 port 40754 Jul 15 05:35:13.690615 sshd-session[5545]: pam_unix(sshd:session): session closed for user core Jul 15 05:35:13.695037 systemd[1]: sshd@9-172.237.155.110:22-139.178.68.195:40754.service: Deactivated successfully. Jul 15 05:35:13.696256 systemd-logind[1538]: Session 10 logged out. Waiting for processes to exit. Jul 15 05:35:13.697116 systemd[1]: session-10.scope: Deactivated successfully. Jul 15 05:35:13.698923 systemd-logind[1538]: Removed session 10. Jul 15 05:35:13.747997 systemd[1]: Started sshd@10-172.237.155.110:22-139.178.68.195:40758.service - OpenSSH per-connection server daemon (139.178.68.195:40758). Jul 15 05:35:14.088507 sshd[5558]: Accepted publickey for core from 139.178.68.195 port 40758 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4 Jul 15 05:35:14.089654 sshd-session[5558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:35:14.094814 systemd-logind[1538]: New session 11 of user core. Jul 15 05:35:14.102181 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 15 05:35:14.383251 sshd[5561]: Connection closed by 139.178.68.195 port 40758 Jul 15 05:35:14.384516 sshd-session[5558]: pam_unix(sshd:session): session closed for user core Jul 15 05:35:14.388433 systemd-logind[1538]: Session 11 logged out. Waiting for processes to exit. Jul 15 05:35:14.389179 systemd[1]: sshd@10-172.237.155.110:22-139.178.68.195:40758.service: Deactivated successfully. Jul 15 05:35:14.390978 systemd[1]: session-11.scope: Deactivated successfully. Jul 15 05:35:14.392443 systemd-logind[1538]: Removed session 11. 
Jul 15 05:35:19.196551 containerd[1551]: time="2025-07-15T05:35:19.196460163Z" level=info msg="TaskExit event in podsandbox handler container_id:\"042b8064e9ff164a1ce0d0c19550817f481827fb18efd2a71b0e704ec23a02e6\" id:\"43ffae80a38fb2f274d7323aff5d00caac0e71be981745f529fbd9847794994a\" pid:5590 exited_at:{seconds:1752557719 nanos:195901805}" Jul 15 05:35:19.447821 systemd[1]: Started sshd@11-172.237.155.110:22-139.178.68.195:40774.service - OpenSSH per-connection server daemon (139.178.68.195:40774). Jul 15 05:35:19.783448 sshd[5600]: Accepted publickey for core from 139.178.68.195 port 40774 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4 Jul 15 05:35:19.785333 sshd-session[5600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:35:19.790972 systemd-logind[1538]: New session 12 of user core. Jul 15 05:35:19.795279 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 15 05:35:20.095208 sshd[5603]: Connection closed by 139.178.68.195 port 40774 Jul 15 05:35:20.095949 sshd-session[5600]: pam_unix(sshd:session): session closed for user core Jul 15 05:35:20.102763 systemd-logind[1538]: Session 12 logged out. Waiting for processes to exit. Jul 15 05:35:20.103675 systemd[1]: sshd@11-172.237.155.110:22-139.178.68.195:40774.service: Deactivated successfully. Jul 15 05:35:20.107170 systemd[1]: session-12.scope: Deactivated successfully. Jul 15 05:35:20.109551 systemd-logind[1538]: Removed session 12. 
Jul 15 05:35:22.806269 kubelet[2737]: E0715 05:35:22.806223 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:35:23.808874 kubelet[2737]: E0715 05:35:23.808761 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:35:24.400964 containerd[1551]: time="2025-07-15T05:35:24.400848455Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c54eed86ef5732bdb8222193b524de2af5571e915730455e11957d4307c4b084\" id:\"3413d87c2ea00c4818a9585a8e44884a0c8dc5cf2756f3b8f8969cb03a98d7b4\" pid:5626 exited_at:{seconds:1752557724 nanos:398388847}" Jul 15 05:35:25.159882 systemd[1]: Started sshd@12-172.237.155.110:22-139.178.68.195:47244.service - OpenSSH per-connection server daemon (139.178.68.195:47244). Jul 15 05:35:25.497292 sshd[5638]: Accepted publickey for core from 139.178.68.195 port 47244 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4 Jul 15 05:35:25.499813 sshd-session[5638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:35:25.507834 systemd-logind[1538]: New session 13 of user core. Jul 15 05:35:25.512260 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 15 05:35:25.802755 sshd[5641]: Connection closed by 139.178.68.195 port 47244 Jul 15 05:35:25.804321 sshd-session[5638]: pam_unix(sshd:session): session closed for user core Jul 15 05:35:25.809718 systemd[1]: sshd@12-172.237.155.110:22-139.178.68.195:47244.service: Deactivated successfully. Jul 15 05:35:25.812445 systemd[1]: session-13.scope: Deactivated successfully. Jul 15 05:35:25.813477 systemd-logind[1538]: Session 13 logged out. Waiting for processes to exit. Jul 15 05:35:25.816472 systemd-logind[1538]: Removed session 13. 
Jul 15 05:35:30.871990 systemd[1]: Started sshd@13-172.237.155.110:22-139.178.68.195:42252.service - OpenSSH per-connection server daemon (139.178.68.195:42252).
Jul 15 05:35:31.206680 sshd[5656]: Accepted publickey for core from 139.178.68.195 port 42252 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:35:31.208710 sshd-session[5656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:35:31.214041 systemd-logind[1538]: New session 14 of user core.
Jul 15 05:35:31.221217 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 15 05:35:31.499975 sshd[5659]: Connection closed by 139.178.68.195 port 42252
Jul 15 05:35:31.500425 sshd-session[5656]: pam_unix(sshd:session): session closed for user core
Jul 15 05:35:31.507217 systemd-logind[1538]: Session 14 logged out. Waiting for processes to exit.
Jul 15 05:35:31.507568 systemd[1]: sshd@13-172.237.155.110:22-139.178.68.195:42252.service: Deactivated successfully.
Jul 15 05:35:31.510017 systemd[1]: session-14.scope: Deactivated successfully.
Jul 15 05:35:31.511392 systemd-logind[1538]: Removed session 14.
Jul 15 05:35:31.564721 systemd[1]: Started sshd@14-172.237.155.110:22-139.178.68.195:42266.service - OpenSSH per-connection server daemon (139.178.68.195:42266).
Jul 15 05:35:31.899109 sshd[5671]: Accepted publickey for core from 139.178.68.195 port 42266 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:35:31.900388 sshd-session[5671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:35:31.906218 systemd-logind[1538]: New session 15 of user core.
Jul 15 05:35:31.911183 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 15 05:35:32.339029 sshd[5674]: Connection closed by 139.178.68.195 port 42266
Jul 15 05:35:32.339468 sshd-session[5671]: pam_unix(sshd:session): session closed for user core
Jul 15 05:35:32.344790 systemd-logind[1538]: Session 15 logged out. Waiting for processes to exit.
Jul 15 05:35:32.345770 systemd[1]: sshd@14-172.237.155.110:22-139.178.68.195:42266.service: Deactivated successfully.
Jul 15 05:35:32.348351 systemd[1]: session-15.scope: Deactivated successfully.
Jul 15 05:35:32.349749 systemd-logind[1538]: Removed session 15.
Jul 15 05:35:32.399267 systemd[1]: Started sshd@15-172.237.155.110:22-139.178.68.195:42272.service - OpenSSH per-connection server daemon (139.178.68.195:42272).
Jul 15 05:35:32.753725 sshd[5684]: Accepted publickey for core from 139.178.68.195 port 42272 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:35:32.757605 sshd-session[5684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:35:32.765615 systemd-logind[1538]: New session 16 of user core.
Jul 15 05:35:32.775282 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 15 05:35:33.581020 sshd[5687]: Connection closed by 139.178.68.195 port 42272
Jul 15 05:35:33.583029 sshd-session[5684]: pam_unix(sshd:session): session closed for user core
Jul 15 05:35:33.586741 systemd-logind[1538]: Session 16 logged out. Waiting for processes to exit.
Jul 15 05:35:33.588543 systemd[1]: sshd@15-172.237.155.110:22-139.178.68.195:42272.service: Deactivated successfully.
Jul 15 05:35:33.592618 systemd[1]: session-16.scope: Deactivated successfully.
Jul 15 05:35:33.596145 systemd-logind[1538]: Removed session 16.
Jul 15 05:35:33.644868 systemd[1]: Started sshd@16-172.237.155.110:22-139.178.68.195:42278.service - OpenSSH per-connection server daemon (139.178.68.195:42278).
Jul 15 05:35:33.990191 sshd[5706]: Accepted publickey for core from 139.178.68.195 port 42278 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:35:33.992218 sshd-session[5706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:35:33.996668 systemd-logind[1538]: New session 17 of user core.
Jul 15 05:35:34.003190 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 15 05:35:34.432598 sshd[5709]: Connection closed by 139.178.68.195 port 42278
Jul 15 05:35:34.434352 sshd-session[5706]: pam_unix(sshd:session): session closed for user core
Jul 15 05:35:34.439590 systemd[1]: sshd@16-172.237.155.110:22-139.178.68.195:42278.service: Deactivated successfully.
Jul 15 05:35:34.442053 systemd[1]: session-17.scope: Deactivated successfully.
Jul 15 05:35:34.444328 systemd-logind[1538]: Session 17 logged out. Waiting for processes to exit.
Jul 15 05:35:34.446363 systemd-logind[1538]: Removed session 17.
Jul 15 05:35:34.493549 systemd[1]: Started sshd@17-172.237.155.110:22-139.178.68.195:42290.service - OpenSSH per-connection server daemon (139.178.68.195:42290).
Jul 15 05:35:34.830941 sshd[5720]: Accepted publickey for core from 139.178.68.195 port 42290 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:35:34.833404 sshd-session[5720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:35:34.838760 systemd-logind[1538]: New session 18 of user core.
Jul 15 05:35:34.844192 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 15 05:35:35.112651 containerd[1551]: time="2025-07-15T05:35:35.112274941Z" level=info msg="TaskExit event in podsandbox handler container_id:\"21469a5cb312e2ab4df5f182ddeba704288da447c84f5914c73363f6c5ba5c64\" id:\"4c25625d910ab0b9af196d783c014c9f16d14fefadcbf54e041808d6bcb02208\" pid:5755 exited_at:{seconds:1752557735 nanos:111381146}"
Jul 15 05:35:35.143102 sshd[5723]: Connection closed by 139.178.68.195 port 42290
Jul 15 05:35:35.143554 sshd-session[5720]: pam_unix(sshd:session): session closed for user core
Jul 15 05:35:35.148008 systemd-logind[1538]: Session 18 logged out. Waiting for processes to exit.
Jul 15 05:35:35.148705 systemd[1]: sshd@17-172.237.155.110:22-139.178.68.195:42290.service: Deactivated successfully.
Jul 15 05:35:35.152490 systemd[1]: session-18.scope: Deactivated successfully.
Jul 15 05:35:35.155621 systemd-logind[1538]: Removed session 18.
Jul 15 05:35:35.160285 containerd[1551]: time="2025-07-15T05:35:35.160165729Z" level=info msg="TaskExit event in podsandbox handler container_id:\"042b8064e9ff164a1ce0d0c19550817f481827fb18efd2a71b0e704ec23a02e6\" id:\"b5b7d5f6f22914f9a895c033e83f65362d2616eec935816de49a56086a6bf5d9\" pid:5758 exited_at:{seconds:1752557735 nanos:159615512}"
Jul 15 05:35:40.204766 systemd[1]: Started sshd@18-172.237.155.110:22-139.178.68.195:39402.service - OpenSSH per-connection server daemon (139.178.68.195:39402).
Jul 15 05:35:40.536560 sshd[5781]: Accepted publickey for core from 139.178.68.195 port 39402 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:35:40.539615 sshd-session[5781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:35:40.558186 systemd-logind[1538]: New session 19 of user core.
Jul 15 05:35:40.563326 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 15 05:35:40.831387 sshd[5784]: Connection closed by 139.178.68.195 port 39402
Jul 15 05:35:40.833970 sshd-session[5781]: pam_unix(sshd:session): session closed for user core
Jul 15 05:35:40.843189 systemd[1]: sshd@18-172.237.155.110:22-139.178.68.195:39402.service: Deactivated successfully.
Jul 15 05:35:40.846953 systemd[1]: session-19.scope: Deactivated successfully.
Jul 15 05:35:40.848607 systemd-logind[1538]: Session 19 logged out. Waiting for processes to exit.
Jul 15 05:35:40.850444 systemd-logind[1538]: Removed session 19.
Jul 15 05:35:44.805887 kubelet[2737]: E0715 05:35:44.805784 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:35:45.806159 kubelet[2737]: E0715 05:35:45.806054 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:35:45.891274 systemd[1]: Started sshd@19-172.237.155.110:22-139.178.68.195:39408.service - OpenSSH per-connection server daemon (139.178.68.195:39408).
Jul 15 05:35:46.231361 sshd[5796]: Accepted publickey for core from 139.178.68.195 port 39408 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:35:46.233810 sshd-session[5796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:35:46.238678 systemd-logind[1538]: New session 20 of user core.
Jul 15 05:35:46.243183 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 15 05:35:46.516693 sshd[5799]: Connection closed by 139.178.68.195 port 39408
Jul 15 05:35:46.517338 sshd-session[5796]: pam_unix(sshd:session): session closed for user core
Jul 15 05:35:46.522981 systemd-logind[1538]: Session 20 logged out. Waiting for processes to exit.
Jul 15 05:35:46.523595 systemd[1]: sshd@19-172.237.155.110:22-139.178.68.195:39408.service: Deactivated successfully.
Jul 15 05:35:46.525605 systemd[1]: session-20.scope: Deactivated successfully.
Jul 15 05:35:46.527730 systemd-logind[1538]: Removed session 20.
Jul 15 05:35:50.806396 kubelet[2737]: E0715 05:35:50.806361 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:35:51.589217 systemd[1]: Started sshd@20-172.237.155.110:22-139.178.68.195:52918.service - OpenSSH per-connection server daemon (139.178.68.195:52918).
Jul 15 05:35:51.919410 sshd[5812]: Accepted publickey for core from 139.178.68.195 port 52918 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:35:51.921503 sshd-session[5812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:35:51.925797 systemd-logind[1538]: New session 21 of user core.
Jul 15 05:35:51.932172 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 15 05:35:52.216779 sshd[5815]: Connection closed by 139.178.68.195 port 52918
Jul 15 05:35:52.217848 sshd-session[5812]: pam_unix(sshd:session): session closed for user core
Jul 15 05:35:52.223315 systemd-logind[1538]: Session 21 logged out. Waiting for processes to exit.
Jul 15 05:35:52.224317 systemd[1]: sshd@20-172.237.155.110:22-139.178.68.195:52918.service: Deactivated successfully.
Jul 15 05:35:52.226616 systemd[1]: session-21.scope: Deactivated successfully.
Jul 15 05:35:52.228724 systemd-logind[1538]: Removed session 21.
Jul 15 05:35:53.155521 containerd[1551]: time="2025-07-15T05:35:53.155418604Z" level=info msg="TaskExit event in podsandbox handler container_id:\"21469a5cb312e2ab4df5f182ddeba704288da447c84f5914c73363f6c5ba5c64\" id:\"44f32f46e2b5512e01c5a803e53a83f59094541c62fe78fea07079099c9caea1\" pid:5838 exited_at:{seconds:1752557753 nanos:154982517}"
Jul 15 05:35:54.398917 containerd[1551]: time="2025-07-15T05:35:54.398847704Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c54eed86ef5732bdb8222193b524de2af5571e915730455e11957d4307c4b084\" id:\"068112ce0dc4eb6340aaa4bc1201b8f62ae8954a83c7de8cb6b48356acb3575b\" pid:5860 exited_at:{seconds:1752557754 nanos:398623815}"
Jul 15 05:35:57.279042 systemd[1]: Started sshd@21-172.237.155.110:22-139.178.68.195:52930.service - OpenSSH per-connection server daemon (139.178.68.195:52930).
Jul 15 05:35:57.612791 sshd[5876]: Accepted publickey for core from 139.178.68.195 port 52930 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:35:57.614676 sshd-session[5876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:35:57.619962 systemd-logind[1538]: New session 22 of user core.
Jul 15 05:35:57.625224 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 15 05:35:57.920300 sshd[5879]: Connection closed by 139.178.68.195 port 52930
Jul 15 05:35:57.920983 sshd-session[5876]: pam_unix(sshd:session): session closed for user core
Jul 15 05:35:57.926200 systemd-logind[1538]: Session 22 logged out. Waiting for processes to exit.
Jul 15 05:35:57.927046 systemd[1]: sshd@21-172.237.155.110:22-139.178.68.195:52930.service: Deactivated successfully.
Jul 15 05:35:57.929684 systemd[1]: session-22.scope: Deactivated successfully.
Jul 15 05:35:57.932883 systemd-logind[1538]: Removed session 22.
Jul 15 05:36:02.986165 systemd[1]: Started sshd@22-172.237.155.110:22-139.178.68.195:56642.service - OpenSSH per-connection server daemon (139.178.68.195:56642).
Jul 15 05:36:03.327575 sshd[5897]: Accepted publickey for core from 139.178.68.195 port 56642 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:36:03.329580 sshd-session[5897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:36:03.334195 systemd-logind[1538]: New session 23 of user core.
Jul 15 05:36:03.338181 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 15 05:36:03.625373 sshd[5900]: Connection closed by 139.178.68.195 port 56642
Jul 15 05:36:03.626372 sshd-session[5897]: pam_unix(sshd:session): session closed for user core
Jul 15 05:36:03.631113 systemd[1]: sshd@22-172.237.155.110:22-139.178.68.195:56642.service: Deactivated successfully.
Jul 15 05:36:03.634122 systemd[1]: session-23.scope: Deactivated successfully.
Jul 15 05:36:03.635156 systemd-logind[1538]: Session 23 logged out. Waiting for processes to exit.
Jul 15 05:36:03.637316 systemd-logind[1538]: Removed session 23.
Jul 15 05:36:05.062762 containerd[1551]: time="2025-07-15T05:36:05.062709272Z" level=info msg="TaskExit event in podsandbox handler container_id:\"21469a5cb312e2ab4df5f182ddeba704288da447c84f5914c73363f6c5ba5c64\" id:\"4235c4a7e3ffc38534b9135582127bd30e5adca8792abacb1347037d6729189f\" pid:5929 exited_at:{seconds:1752557765 nanos:62513293}"
Jul 15 05:36:05.094797 containerd[1551]: time="2025-07-15T05:36:05.094756077Z" level=info msg="TaskExit event in podsandbox handler container_id:\"042b8064e9ff164a1ce0d0c19550817f481827fb18efd2a71b0e704ec23a02e6\" id:\"413a7362f7315261696e65a280766958a294df249a6e8c491610d16dce23f8ec\" pid:5941 exited_at:{seconds:1752557765 nanos:94402589}"