Jul 7 06:04:08.851173 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 21:56:00 -00 2025
Jul 7 06:04:08.851192 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50
Jul 7 06:04:08.851201 kernel: BIOS-provided physical RAM map:
Jul 7 06:04:08.851209 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Jul 7 06:04:08.851214 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Jul 7 06:04:08.851219 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 7 06:04:08.851225 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Jul 7 06:04:08.851231 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Jul 7 06:04:08.851236 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 7 06:04:08.851242 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 7 06:04:08.851247 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 7 06:04:08.851253 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 7 06:04:08.851260 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Jul 7 06:04:08.851266 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 7 06:04:08.851272 kernel: NX (Execute Disable) protection: active
Jul 7 06:04:08.851278 kernel: APIC: Static calls initialized
Jul 7 06:04:08.851284 kernel: SMBIOS 2.8 present.
Jul 7 06:04:08.851291 kernel: DMI: Linode Compute Instance, BIOS Not Specified
Jul 7 06:04:08.851297 kernel: DMI: Memory slots populated: 1/1
Jul 7 06:04:08.851303 kernel: Hypervisor detected: KVM
Jul 7 06:04:08.851308 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 7 06:04:08.851314 kernel: kvm-clock: using sched offset of 5872010828 cycles
Jul 7 06:04:08.851320 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 7 06:04:08.851327 kernel: tsc: Detected 2000.000 MHz processor
Jul 7 06:04:08.851333 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 7 06:04:08.851339 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 7 06:04:08.851345 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Jul 7 06:04:08.851353 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 7 06:04:08.851359 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 7 06:04:08.851365 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Jul 7 06:04:08.851371 kernel: Using GB pages for direct mapping
Jul 7 06:04:08.851377 kernel: ACPI: Early table checksum verification disabled
Jul 7 06:04:08.851383 kernel: ACPI: RSDP 0x00000000000F51B0 000014 (v00 BOCHS )
Jul 7 06:04:08.851389 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:04:08.851395 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:04:08.851401 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:04:08.851409 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jul 7 06:04:08.851415 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:04:08.851421 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:04:08.851427 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:04:08.851436 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:04:08.851442 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Jul 7 06:04:08.851450 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Jul 7 06:04:08.851457 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jul 7 06:04:08.851463 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Jul 7 06:04:08.851469 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Jul 7 06:04:08.851475 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Jul 7 06:04:08.851482 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Jul 7 06:04:08.851488 kernel: No NUMA configuration found
Jul 7 06:04:08.851494 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Jul 7 06:04:08.851502 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff]
Jul 7 06:04:08.851508 kernel: Zone ranges:
Jul 7 06:04:08.851515 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 7 06:04:08.851521 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 7 06:04:08.851527 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Jul 7 06:04:08.851533 kernel: Device empty
Jul 7 06:04:08.851539 kernel: Movable zone start for each node
Jul 7 06:04:08.851545 kernel: Early memory node ranges
Jul 7 06:04:08.851552 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 7 06:04:08.851558 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Jul 7 06:04:08.851566 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Jul 7 06:04:08.851572 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Jul 7 06:04:08.851578 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 7 06:04:08.851585 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 7 06:04:08.851591 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jul 7 06:04:08.851597 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 7 06:04:08.851603 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 7 06:04:08.851609 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 7 06:04:08.851616 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 7 06:04:08.851624 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 7 06:04:08.851630 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 7 06:04:08.851636 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 7 06:04:08.851642 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 7 06:04:08.851648 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 7 06:04:08.851655 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 7 06:04:08.851661 kernel: TSC deadline timer available
Jul 7 06:04:08.851667 kernel: CPU topo: Max. logical packages: 1
Jul 7 06:04:08.851673 kernel: CPU topo: Max. logical dies: 1
Jul 7 06:04:08.851681 kernel: CPU topo: Max. dies per package: 1
Jul 7 06:04:08.851687 kernel: CPU topo: Max. threads per core: 1
Jul 7 06:04:08.851693 kernel: CPU topo: Num. cores per package: 2
Jul 7 06:04:08.851699 kernel: CPU topo: Num. threads per package: 2
Jul 7 06:04:08.851706 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jul 7 06:04:08.851712 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 7 06:04:08.851718 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 7 06:04:08.851724 kernel: kvm-guest: setup PV sched yield
Jul 7 06:04:08.851730 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 7 06:04:08.851738 kernel: Booting paravirtualized kernel on KVM
Jul 7 06:04:08.851768 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 7 06:04:08.851776 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 7 06:04:08.851782 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jul 7 06:04:08.851789 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jul 7 06:04:08.851795 kernel: pcpu-alloc: [0] 0 1
Jul 7 06:04:08.851801 kernel: kvm-guest: PV spinlocks enabled
Jul 7 06:04:08.851807 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 7 06:04:08.851814 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50
Jul 7 06:04:08.851824 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 06:04:08.851830 kernel: random: crng init done
Jul 7 06:04:08.851836 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 7 06:04:08.851843 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 7 06:04:08.851849 kernel: Fallback order for Node 0: 0
Jul 7 06:04:08.851855 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Jul 7 06:04:08.851861 kernel: Policy zone: Normal
Jul 7 06:04:08.851868 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 06:04:08.851875 kernel: software IO TLB: area num 2.
Jul 7 06:04:08.851882 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 7 06:04:08.851888 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 7 06:04:08.851894 kernel: ftrace: allocated 157 pages with 5 groups
Jul 7 06:04:08.851900 kernel: Dynamic Preempt: voluntary
Jul 7 06:04:08.851906 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 06:04:08.851913 kernel: rcu: RCU event tracing is enabled.
Jul 7 06:04:08.851920 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 7 06:04:08.851927 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 06:04:08.851935 kernel: Rude variant of Tasks RCU enabled.
Jul 7 06:04:08.851941 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 06:04:08.851947 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 06:04:08.851953 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 7 06:04:08.851960 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 06:04:08.852111 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 06:04:08.852119 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 06:04:08.852125 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 7 06:04:08.852132 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 06:04:08.852138 kernel: Console: colour VGA+ 80x25
Jul 7 06:04:08.852145 kernel: printk: legacy console [tty0] enabled
Jul 7 06:04:08.852151 kernel: printk: legacy console [ttyS0] enabled
Jul 7 06:04:08.852160 kernel: ACPI: Core revision 20240827
Jul 7 06:04:08.852166 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 7 06:04:08.852173 kernel: APIC: Switch to symmetric I/O mode setup
Jul 7 06:04:08.852179 kernel: x2apic enabled
Jul 7 06:04:08.852186 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 7 06:04:08.852194 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 7 06:04:08.852201 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 7 06:04:08.852207 kernel: kvm-guest: setup PV IPIs
Jul 7 06:04:08.852214 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 7 06:04:08.852221 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Jul 7 06:04:08.852227 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Jul 7 06:04:08.852234 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 7 06:04:08.852240 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 7 06:04:08.852247 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 7 06:04:08.852255 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 7 06:04:08.852262 kernel: Spectre V2 : Mitigation: Retpolines
Jul 7 06:04:08.852268 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 7 06:04:08.852275 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jul 7 06:04:08.852281 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 7 06:04:08.852288 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 7 06:04:08.852295 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 7 06:04:08.852302 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 7 06:04:08.852310 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 7 06:04:08.852316 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 7 06:04:08.852323 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 7 06:04:08.852330 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 7 06:04:08.852336 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jul 7 06:04:08.852343 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 7 06:04:08.852349 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Jul 7 06:04:08.852356 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Jul 7 06:04:08.852362 kernel: Freeing SMP alternatives memory: 32K
Jul 7 06:04:08.852371 kernel: pid_max: default: 32768 minimum: 301
Jul 7 06:04:08.852377 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 7 06:04:08.852384 kernel: landlock: Up and running.
Jul 7 06:04:08.852390 kernel: SELinux: Initializing.
Jul 7 06:04:08.852397 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 06:04:08.852403 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 06:04:08.852410 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jul 7 06:04:08.852416 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 7 06:04:08.852423 kernel: ... version: 0
Jul 7 06:04:08.852431 kernel: ... bit width: 48
Jul 7 06:04:08.852437 kernel: ... generic registers: 6
Jul 7 06:04:08.852444 kernel: ... value mask: 0000ffffffffffff
Jul 7 06:04:08.852450 kernel: ... max period: 00007fffffffffff
Jul 7 06:04:08.852457 kernel: ... fixed-purpose events: 0
Jul 7 06:04:08.852463 kernel: ... event mask: 000000000000003f
Jul 7 06:04:08.852470 kernel: signal: max sigframe size: 3376
Jul 7 06:04:08.852476 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 06:04:08.852483 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 06:04:08.852491 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 7 06:04:08.852497 kernel: smp: Bringing up secondary CPUs ...
Jul 7 06:04:08.852504 kernel: smpboot: x86: Booting SMP configuration:
Jul 7 06:04:08.852510 kernel: .... node #0, CPUs: #1
Jul 7 06:04:08.852517 kernel: smp: Brought up 1 node, 2 CPUs
Jul 7 06:04:08.852523 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Jul 7 06:04:08.852530 kernel: Memory: 3961808K/4193772K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54432K init, 2536K bss, 227288K reserved, 0K cma-reserved)
Jul 7 06:04:08.852537 kernel: devtmpfs: initialized
Jul 7 06:04:08.852543 kernel: x86/mm: Memory block size: 128MB
Jul 7 06:04:08.852552 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 06:04:08.852558 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 7 06:04:08.852565 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 06:04:08.852571 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 06:04:08.852578 kernel: audit: initializing netlink subsys (disabled)
Jul 7 06:04:08.852584 kernel: audit: type=2000 audit(1751868246.173:1): state=initialized audit_enabled=0 res=1
Jul 7 06:04:08.852591 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 06:04:08.852597 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 7 06:04:08.852604 kernel: cpuidle: using governor menu
Jul 7 06:04:08.852612 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 06:04:08.852618 kernel: dca service started, version 1.12.1
Jul 7 06:04:08.852625 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jul 7 06:04:08.852632 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 7 06:04:08.852638 kernel: PCI: Using configuration type 1 for base access
Jul 7 06:04:08.852645 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 7 06:04:08.852651 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 7 06:04:08.852658 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 7 06:04:08.852664 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 06:04:08.852673 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 06:04:08.852679 kernel: ACPI: Added _OSI(Module Device)
Jul 7 06:04:08.852686 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 06:04:08.852692 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 06:04:08.852699 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 7 06:04:08.852705 kernel: ACPI: Interpreter enabled
Jul 7 06:04:08.852712 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 7 06:04:08.852718 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 7 06:04:08.852725 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 7 06:04:08.852733 kernel: PCI: Using E820 reservations for host bridge windows
Jul 7 06:04:08.852739 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 7 06:04:08.852757 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 7 06:04:08.853623 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 7 06:04:08.853738 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 7 06:04:08.853898 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 7 06:04:08.853909 kernel: PCI host bridge to bus 0000:00
Jul 7 06:04:08.854021 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 7 06:04:08.854119 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 7 06:04:08.854214 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 7 06:04:08.854308 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jul 7 06:04:08.854401 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 7 06:04:08.854494 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Jul 7 06:04:08.854588 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 7 06:04:08.854734 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jul 7 06:04:08.856567 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 7 06:04:08.856683 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jul 7 06:04:08.856818 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jul 7 06:04:08.856926 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jul 7 06:04:08.857029 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 7 06:04:08.857142 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Jul 7 06:04:08.857275 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Jul 7 06:04:08.857662 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jul 7 06:04:08.857844 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 7 06:04:08.857968 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 7 06:04:08.858075 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Jul 7 06:04:08.858179 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jul 7 06:04:08.858287 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 7 06:04:08.858390 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jul 7 06:04:08.858502 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jul 7 06:04:08.858605 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 7 06:04:08.858801 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jul 7 06:04:08.858925 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Jul 7 06:04:08.859030 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Jul 7 06:04:08.859145 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jul 7 06:04:08.859250 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jul 7 06:04:08.859259 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 7 06:04:08.859266 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 7 06:04:08.859273 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 7 06:04:08.859279 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 7 06:04:08.859286 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 7 06:04:08.859293 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 7 06:04:08.859302 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 7 06:04:08.859308 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 7 06:04:08.859315 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 7 06:04:08.859321 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 7 06:04:08.859328 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 7 06:04:08.859334 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 7 06:04:08.859341 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 7 06:04:08.859348 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 7 06:04:08.859364 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 7 06:04:08.859388 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 7 06:04:08.859401 kernel: iommu: Default domain type: Translated
Jul 7 06:04:08.859407 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 7 06:04:08.859414 kernel: PCI: Using ACPI for IRQ routing
Jul 7 06:04:08.859421 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 7 06:04:08.859427 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Jul 7 06:04:08.859434 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Jul 7 06:04:08.859540 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 7 06:04:08.859649 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 7 06:04:08.860504 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 7 06:04:08.860518 kernel: vgaarb: loaded
Jul 7 06:04:08.860536 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 7 06:04:08.860551 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 7 06:04:08.860558 kernel: clocksource: Switched to clocksource kvm-clock
Jul 7 06:04:08.860565 kernel: VFS: Disk quotas dquot_6.6.0
Jul 7 06:04:08.860572 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 7 06:04:08.860579 kernel: pnp: PnP ACPI init
Jul 7 06:04:08.860713 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 7 06:04:08.860724 kernel: pnp: PnP ACPI: found 5 devices
Jul 7 06:04:08.860731 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 7 06:04:08.860737 kernel: NET: Registered PF_INET protocol family
Jul 7 06:04:08.860759 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 7 06:04:08.860767 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 7 06:04:08.860773 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 7 06:04:08.860780 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 7 06:04:08.860790 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 7 06:04:08.860796 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 7 06:04:08.860803 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 06:04:08.860809 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 06:04:08.860816 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 7 06:04:08.860823 kernel: NET: Registered PF_XDP protocol family
Jul 7 06:04:08.861116 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 7 06:04:08.861212 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 7 06:04:08.861307 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 7 06:04:08.861404 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jul 7 06:04:08.861498 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 7 06:04:08.861591 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Jul 7 06:04:08.861600 kernel: PCI: CLS 0 bytes, default 64
Jul 7 06:04:08.861607 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 7 06:04:08.861614 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Jul 7 06:04:08.861620 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Jul 7 06:04:08.861627 kernel: Initialise system trusted keyrings
Jul 7 06:04:08.861636 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 7 06:04:08.861642 kernel: Key type asymmetric registered
Jul 7 06:04:08.861649 kernel: Asymmetric key parser 'x509' registered
Jul 7 06:04:08.861656 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 7 06:04:08.861662 kernel: io scheduler mq-deadline registered
Jul 7 06:04:08.861669 kernel: io scheduler kyber registered
Jul 7 06:04:08.861675 kernel: io scheduler bfq registered
Jul 7 06:04:08.861682 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 7 06:04:08.861689 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 7 06:04:08.861698 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 7 06:04:08.861704 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 7 06:04:08.861722 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 7 06:04:08.861759 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 7 06:04:08.861766 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 7 06:04:08.861952 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 7 06:04:08.862224 kernel: rtc_cmos 00:03: RTC can wake from S4
Jul 7 06:04:08.862236 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 7 06:04:08.862340 kernel: rtc_cmos 00:03: registered as rtc0
Jul 7 06:04:08.866788 kernel: rtc_cmos 00:03: setting system clock to 2025-07-07T06:04:08 UTC (1751868248)
Jul 7 06:04:08.866912 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 7 06:04:08.866923 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 7 06:04:08.867119 kernel: NET: Registered PF_INET6 protocol family
Jul 7 06:04:08.867126 kernel: Segment Routing with IPv6
Jul 7 06:04:08.867133 kernel: In-situ OAM (IOAM) with IPv6
Jul 7 06:04:08.867139 kernel: NET: Registered PF_PACKET protocol family
Jul 7 06:04:08.867146 kernel: Key type dns_resolver registered
Jul 7 06:04:08.867155 kernel: IPI shorthand broadcast: enabled
Jul 7 06:04:08.867162 kernel: sched_clock: Marking stable (2779003914, 212678488)->(3028896365, -37213963)
Jul 7 06:04:08.867169 kernel: registered taskstats version 1
Jul 7 06:04:08.867175 kernel: Loading compiled-in X.509 certificates
Jul 7 06:04:08.867182 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: b8e96f4c6a9e663230fc9c12b186cf91fcc7a64e'
Jul 7 06:04:08.867188 kernel: Demotion targets for Node 0: null
Jul 7 06:04:08.867195 kernel: Key type .fscrypt registered
Jul 7 06:04:08.867201 kernel: Key type fscrypt-provisioning registered
Jul 7 06:04:08.867208 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 7 06:04:08.867217 kernel: ima: Allocated hash algorithm: sha1
Jul 7 06:04:08.867223 kernel: ima: No architecture policies found
Jul 7 06:04:08.867230 kernel: clk: Disabling unused clocks
Jul 7 06:04:08.867236 kernel: Warning: unable to open an initial console.
Jul 7 06:04:08.867243 kernel: Freeing unused kernel image (initmem) memory: 54432K
Jul 7 06:04:08.867250 kernel: Write protecting the kernel read-only data: 24576k
Jul 7 06:04:08.867256 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 7 06:04:08.867263 kernel: Run /init as init process
Jul 7 06:04:08.867269 kernel: with arguments:
Jul 7 06:04:08.867278 kernel: /init
Jul 7 06:04:08.867284 kernel: with environment:
Jul 7 06:04:08.867291 kernel: HOME=/
Jul 7 06:04:08.867297 kernel: TERM=linux
Jul 7 06:04:08.867303 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 7 06:04:08.867323 systemd[1]: Successfully made /usr/ read-only.
Jul 7 06:04:08.867334 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 06:04:08.867344 systemd[1]: Detected virtualization kvm.
Jul 7 06:04:08.867352 systemd[1]: Detected architecture x86-64.
Jul 7 06:04:08.867359 systemd[1]: Running in initrd.
Jul 7 06:04:08.867366 systemd[1]: No hostname configured, using default hostname.
Jul 7 06:04:08.867373 systemd[1]: Hostname set to <localhost>.
Jul 7 06:04:08.867381 systemd[1]: Initializing machine ID from random generator.
Jul 7 06:04:08.867388 systemd[1]: Queued start job for default target initrd.target.
Jul 7 06:04:08.867395 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:04:08.867403 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:04:08.867413 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 06:04:08.867422 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 06:04:08.867429 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 06:04:08.867438 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 06:04:08.867446 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 06:04:08.867453 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 06:04:08.867462 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:04:08.867470 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:04:08.867477 systemd[1]: Reached target paths.target - Path Units.
Jul 7 06:04:08.867484 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 06:04:08.867492 systemd[1]: Reached target swap.target - Swaps.
Jul 7 06:04:08.867499 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 06:04:08.867507 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 06:04:08.867514 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 06:04:08.867521 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 06:04:08.867530 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 7 06:04:08.867538 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:04:08.867545 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:04:08.867552 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 06:04:08.867560 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 06:04:08.867567 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 7 06:04:08.867577 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 06:04:08.867584 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 7 06:04:08.867592 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 7 06:04:08.867599 systemd[1]: Starting systemd-fsck-usr.service... Jul 7 06:04:08.867607 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 06:04:08.867614 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 06:04:08.867622 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:04:08.867629 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 7 06:04:08.867657 systemd-journald[207]: Collecting audit messages is disabled. Jul 7 06:04:08.867677 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 06:04:08.867685 systemd[1]: Finished systemd-fsck-usr.service. Jul 7 06:04:08.867693 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 06:04:08.867701 systemd-journald[207]: Journal started Jul 7 06:04:08.867717 systemd-journald[207]: Runtime Journal (/run/log/journal/a4383b9efa184ad7acf1ea22e909cf58) is 8M, max 78.5M, 70.5M free. Jul 7 06:04:08.860595 systemd-modules-load[208]: Inserted module 'overlay' Jul 7 06:04:08.879520 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 06:04:08.891966 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Jul 7 06:04:08.893764 kernel: Bridge firewalling registered Jul 7 06:04:08.894775 systemd-modules-load[208]: Inserted module 'br_netfilter' Jul 7 06:04:08.929767 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 06:04:08.951521 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:04:08.952643 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 06:04:08.956703 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 06:04:08.958889 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 06:04:08.961859 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 06:04:08.967839 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 06:04:08.972738 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 06:04:08.983271 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 06:04:08.987794 systemd-tmpfiles[229]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 7 06:04:08.987888 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 06:04:08.990923 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 7 06:04:08.996061 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 06:04:08.999838 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jul 7 06:04:09.007202 dracut-cmdline[243]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50 Jul 7 06:04:09.039710 systemd-resolved[247]: Positive Trust Anchors: Jul 7 06:04:09.040513 systemd-resolved[247]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 06:04:09.040541 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 06:04:09.046336 systemd-resolved[247]: Defaulting to hostname 'linux'. Jul 7 06:04:09.047318 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 06:04:09.047938 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 06:04:09.084788 kernel: SCSI subsystem initialized Jul 7 06:04:09.092766 kernel: Loading iSCSI transport class v2.0-870. Jul 7 06:04:09.103775 kernel: iscsi: registered transport (tcp) Jul 7 06:04:09.123312 kernel: iscsi: registered transport (qla4xxx) Jul 7 06:04:09.123349 kernel: QLogic iSCSI HBA Driver Jul 7 06:04:09.139355 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jul 7 06:04:09.152021 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 06:04:09.154203 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 06:04:09.191962 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 7 06:04:09.193578 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 7 06:04:09.241770 kernel: raid6: avx2x4 gen() 28996 MB/s Jul 7 06:04:09.259766 kernel: raid6: avx2x2 gen() 30376 MB/s Jul 7 06:04:09.278305 kernel: raid6: avx2x1 gen() 22980 MB/s Jul 7 06:04:09.278324 kernel: raid6: using algorithm avx2x2 gen() 30376 MB/s Jul 7 06:04:09.297327 kernel: raid6: .... xor() 29200 MB/s, rmw enabled Jul 7 06:04:09.297344 kernel: raid6: using avx2x2 recovery algorithm Jul 7 06:04:09.316965 kernel: xor: automatically using best checksumming function avx Jul 7 06:04:09.443775 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 7 06:04:09.450484 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 7 06:04:09.452341 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 06:04:09.476308 systemd-udevd[454]: Using default interface naming scheme 'v255'. Jul 7 06:04:09.481300 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 06:04:09.484124 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 7 06:04:09.515270 dracut-pre-trigger[460]: rd.md=0: removing MD RAID activation Jul 7 06:04:09.537828 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 06:04:09.539400 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 06:04:09.602275 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 06:04:09.607067 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jul 7 06:04:09.769782 kernel: cryptd: max_cpu_qlen set to 1000 Jul 7 06:04:09.783019 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues Jul 7 06:04:09.794113 kernel: scsi host0: Virtio SCSI HBA Jul 7 06:04:09.816786 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jul 7 06:04:09.819847 kernel: AES CTR mode by8 optimization enabled Jul 7 06:04:09.826872 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 06:04:09.827051 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:04:09.829376 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:04:09.847634 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:04:09.860733 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 7 06:04:09.872141 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jul 7 06:04:09.873028 kernel: libata version 3.00 loaded. Jul 7 06:04:09.882741 kernel: sd 0:0:0:0: Power-on or device reset occurred Jul 7 06:04:09.886672 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) Jul 7 06:04:09.886845 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 7 06:04:09.886981 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Jul 7 06:04:09.890211 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 7 06:04:09.895816 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 7 06:04:09.895840 kernel: GPT:9289727 != 167739391 Jul 7 06:04:09.895851 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 7 06:04:09.895861 kernel: GPT:9289727 != 167739391 Jul 7 06:04:09.895869 kernel: GPT: Use GNU Parted to correct GPT errors. 
Jul 7 06:04:09.895879 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 7 06:04:09.896990 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 7 06:04:09.902771 kernel: ahci 0000:00:1f.2: version 3.0 Jul 7 06:04:09.903982 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 7 06:04:09.905914 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jul 7 06:04:09.906281 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jul 7 06:04:09.906412 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 7 06:04:09.908764 kernel: scsi host1: ahci Jul 7 06:04:09.909809 kernel: scsi host2: ahci Jul 7 06:04:09.910858 kernel: scsi host3: ahci Jul 7 06:04:09.911775 kernel: scsi host4: ahci Jul 7 06:04:09.912132 kernel: scsi host5: ahci Jul 7 06:04:09.913038 kernel: scsi host6: ahci Jul 7 06:04:09.913203 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 0 Jul 7 06:04:09.913215 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 0 Jul 7 06:04:09.913229 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 0 Jul 7 06:04:09.913238 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 0 Jul 7 06:04:09.913247 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 0 Jul 7 06:04:09.913255 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 0 Jul 7 06:04:09.951689 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jul 7 06:04:10.021626 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:04:10.047188 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jul 7 06:04:10.054053 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. 
Jul 7 06:04:10.054631 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jul 7 06:04:10.063612 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jul 7 06:04:10.065962 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 7 06:04:10.081780 disk-uuid[621]: Primary Header is updated. Jul 7 06:04:10.081780 disk-uuid[621]: Secondary Entries is updated. Jul 7 06:04:10.081780 disk-uuid[621]: Secondary Header is updated. Jul 7 06:04:10.089781 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 7 06:04:10.227634 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jul 7 06:04:10.227673 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 7 06:04:10.227684 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 7 06:04:10.227762 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 7 06:04:10.230908 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 7 06:04:10.231768 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 7 06:04:10.246559 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 7 06:04:10.248413 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 06:04:10.249688 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 06:04:10.250488 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 06:04:10.252851 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 7 06:04:10.271365 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 7 06:04:11.109297 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 7 06:04:11.109358 disk-uuid[622]: The operation has completed successfully. Jul 7 06:04:11.167804 systemd[1]: disk-uuid.service: Deactivated successfully. 
Jul 7 06:04:11.167952 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 7 06:04:11.196910 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 7 06:04:11.217873 sh[649]: Success Jul 7 06:04:11.237241 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 7 06:04:11.237267 kernel: device-mapper: uevent: version 1.0.3 Jul 7 06:04:11.237874 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 7 06:04:11.250778 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jul 7 06:04:11.300098 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 7 06:04:11.305828 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 7 06:04:11.320629 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 7 06:04:11.335302 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 7 06:04:11.335323 kernel: BTRFS: device fsid 9d124217-7448-4fc6-a329-8a233bb5a0ac devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (661) Jul 7 06:04:11.342890 kernel: BTRFS info (device dm-0): first mount of filesystem 9d124217-7448-4fc6-a329-8a233bb5a0ac Jul 7 06:04:11.342930 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 7 06:04:11.342942 kernel: BTRFS info (device dm-0): using free-space-tree Jul 7 06:04:11.352370 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 7 06:04:11.353446 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 7 06:04:11.354368 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 7 06:04:11.355267 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jul 7 06:04:11.358233 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 7 06:04:11.386800 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (695) Jul 7 06:04:11.392562 kernel: BTRFS info (device sda6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:04:11.392592 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 06:04:11.392603 kernel: BTRFS info (device sda6): using free-space-tree Jul 7 06:04:11.404769 kernel: BTRFS info (device sda6): last unmount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:04:11.405632 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 7 06:04:11.408071 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 7 06:04:11.495271 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 06:04:11.501779 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 06:04:11.521028 ignition[760]: Ignition 2.21.0 Jul 7 06:04:11.521051 ignition[760]: Stage: fetch-offline Jul 7 06:04:11.521089 ignition[760]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:04:11.521099 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jul 7 06:04:11.525467 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jul 7 06:04:11.521194 ignition[760]: parsed url from cmdline: "" Jul 7 06:04:11.521201 ignition[760]: no config URL provided Jul 7 06:04:11.521205 ignition[760]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 06:04:11.521213 ignition[760]: no config at "/usr/lib/ignition/user.ign" Jul 7 06:04:11.521218 ignition[760]: failed to fetch config: resource requires networking Jul 7 06:04:11.522921 ignition[760]: Ignition finished successfully Jul 7 06:04:11.545079 systemd-networkd[835]: lo: Link UP Jul 7 06:04:11.545095 systemd-networkd[835]: lo: Gained carrier Jul 7 06:04:11.547944 systemd-networkd[835]: Enumeration completed Jul 7 06:04:11.548258 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 06:04:11.548653 systemd-networkd[835]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 06:04:11.549392 systemd-networkd[835]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 06:04:11.549788 systemd[1]: Reached target network.target - Network. Jul 7 06:04:11.552226 systemd-networkd[835]: eth0: Link UP Jul 7 06:04:11.552229 systemd-networkd[835]: eth0: Gained carrier Jul 7 06:04:11.552244 systemd-networkd[835]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 06:04:11.553040 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 7 06:04:11.574394 ignition[840]: Ignition 2.21.0 Jul 7 06:04:11.574405 ignition[840]: Stage: fetch Jul 7 06:04:11.574523 ignition[840]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:04:11.574533 ignition[840]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jul 7 06:04:11.574609 ignition[840]: parsed url from cmdline: "" Jul 7 06:04:11.574612 ignition[840]: no config URL provided Jul 7 06:04:11.574617 ignition[840]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 06:04:11.574625 ignition[840]: no config at "/usr/lib/ignition/user.ign" Jul 7 06:04:11.574670 ignition[840]: PUT http://169.254.169.254/v1/token: attempt #1 Jul 7 06:04:11.574875 ignition[840]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Jul 7 06:04:11.775441 ignition[840]: PUT http://169.254.169.254/v1/token: attempt #2 Jul 7 06:04:11.776423 ignition[840]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Jul 7 06:04:12.013821 systemd-networkd[835]: eth0: DHCPv4 address 172.236.119.245/24, gateway 172.236.119.1 acquired from 23.205.167.178 Jul 7 06:04:12.177115 ignition[840]: PUT http://169.254.169.254/v1/token: attempt #3 Jul 7 06:04:12.274554 ignition[840]: PUT result: OK Jul 7 06:04:12.274599 ignition[840]: GET http://169.254.169.254/v1/user-data: attempt #1 Jul 7 06:04:12.389500 ignition[840]: GET result: OK Jul 7 06:04:12.389652 ignition[840]: parsing config with SHA512: 7f74a20dfca9fb4dbfedeb0f538a6afe813e00eff0ac84a72286a970123c63868883a5814ce632639e84ab1ebb90b9a2f1d870ce1fb13ccec17224cc64d1ed81 Jul 7 06:04:12.394542 unknown[840]: fetched base config from "system" Jul 7 06:04:12.394560 unknown[840]: fetched base config from "system" Jul 7 06:04:12.395427 ignition[840]: fetch: fetch complete Jul 7 06:04:12.394568 unknown[840]: fetched user config from "akamai" Jul 7 06:04:12.395432 ignition[840]: fetch: fetch passed Jul 7 06:04:12.395476 
ignition[840]: Ignition finished successfully Jul 7 06:04:12.398433 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 7 06:04:12.399876 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 7 06:04:12.423734 ignition[848]: Ignition 2.21.0 Jul 7 06:04:12.434247 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 7 06:04:12.423759 ignition[848]: Stage: kargs Jul 7 06:04:12.423869 ignition[848]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:04:12.423879 ignition[848]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jul 7 06:04:12.447187 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 7 06:04:12.426594 ignition[848]: kargs: kargs passed Jul 7 06:04:12.426633 ignition[848]: Ignition finished successfully Jul 7 06:04:12.479570 ignition[855]: Ignition 2.21.0 Jul 7 06:04:12.479583 ignition[855]: Stage: disks Jul 7 06:04:12.479724 ignition[855]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:04:12.479734 ignition[855]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jul 7 06:04:12.480397 ignition[855]: disks: disks passed Jul 7 06:04:12.482097 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 7 06:04:12.480432 ignition[855]: Ignition finished successfully Jul 7 06:04:12.483043 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 7 06:04:12.483936 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 7 06:04:12.484963 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 06:04:12.486093 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 06:04:12.487828 systemd[1]: Reached target basic.target - Basic System. Jul 7 06:04:12.489689 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jul 7 06:04:12.515542 systemd-fsck[863]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 7 06:04:12.518410 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 7 06:04:12.521422 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 7 06:04:12.625776 kernel: EXT4-fs (sda9): mounted filesystem df0fa228-af1b-4496-9a54-2d4ccccd27d9 r/w with ordered data mode. Quota mode: none. Jul 7 06:04:12.626035 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 7 06:04:12.626904 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 7 06:04:12.629119 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 06:04:12.632821 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 7 06:04:12.634173 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 7 06:04:12.634210 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 7 06:04:12.634231 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 06:04:12.638172 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 7 06:04:12.640216 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 7 06:04:12.650776 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (871) Jul 7 06:04:12.653832 kernel: BTRFS info (device sda6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:04:12.653853 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 06:04:12.658016 kernel: BTRFS info (device sda6): using free-space-tree Jul 7 06:04:12.662871 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 7 06:04:12.692526 initrd-setup-root[896]: cut: /sysroot/etc/passwd: No such file or directory Jul 7 06:04:12.698147 initrd-setup-root[903]: cut: /sysroot/etc/group: No such file or directory Jul 7 06:04:12.701782 initrd-setup-root[910]: cut: /sysroot/etc/shadow: No such file or directory Jul 7 06:04:12.706216 initrd-setup-root[917]: cut: /sysroot/etc/gshadow: No such file or directory Jul 7 06:04:12.783958 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 7 06:04:12.785795 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 7 06:04:12.787457 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 7 06:04:12.798363 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 7 06:04:12.801770 kernel: BTRFS info (device sda6): last unmount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:04:12.814847 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 7 06:04:12.822803 ignition[985]: INFO : Ignition 2.21.0 Jul 7 06:04:12.822803 ignition[985]: INFO : Stage: mount Jul 7 06:04:12.823998 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 06:04:12.823998 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jul 7 06:04:12.823998 ignition[985]: INFO : mount: mount passed Jul 7 06:04:12.823998 ignition[985]: INFO : Ignition finished successfully Jul 7 06:04:12.824787 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 7 06:04:12.827128 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 7 06:04:13.432905 systemd-networkd[835]: eth0: Gained IPv6LL Jul 7 06:04:13.627932 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jul 7 06:04:13.647805 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (996) Jul 7 06:04:13.652069 kernel: BTRFS info (device sda6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:04:13.652089 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 06:04:13.652100 kernel: BTRFS info (device sda6): using free-space-tree Jul 7 06:04:13.656953 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 06:04:13.682331 ignition[1012]: INFO : Ignition 2.21.0 Jul 7 06:04:13.682331 ignition[1012]: INFO : Stage: files Jul 7 06:04:13.684152 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 06:04:13.684152 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jul 7 06:04:13.684152 ignition[1012]: DEBUG : files: compiled without relabeling support, skipping Jul 7 06:04:13.686307 ignition[1012]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 7 06:04:13.686307 ignition[1012]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 7 06:04:13.687830 ignition[1012]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 7 06:04:13.687830 ignition[1012]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 7 06:04:13.687830 ignition[1012]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 7 06:04:13.686908 unknown[1012]: wrote ssh authorized keys file for user: core Jul 7 06:04:13.690664 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 7 06:04:13.690664 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jul 7 06:04:13.915628 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: 
OK Jul 7 06:04:14.260935 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 7 06:04:14.262141 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 7 06:04:14.262141 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 7 06:04:14.262141 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 7 06:04:14.262141 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 7 06:04:14.262141 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 06:04:14.262141 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 06:04:14.262141 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 06:04:14.262141 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 06:04:14.269141 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 06:04:14.269141 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 06:04:14.269141 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 7 06:04:14.269141 ignition[1012]: INFO : files: createFilesystemsFiles: 
createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 7 06:04:14.269141 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 7 06:04:14.269141 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jul 7 06:04:14.794130 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 7 06:04:15.253476 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 7 06:04:15.253476 ignition[1012]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 7 06:04:15.258219 ignition[1012]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 06:04:15.260529 ignition[1012]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 06:04:15.260529 ignition[1012]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 7 06:04:15.260529 ignition[1012]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 7 06:04:15.260529 ignition[1012]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jul 7 06:04:15.260529 ignition[1012]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jul 7 06:04:15.260529 ignition[1012]: INFO : files: op(d): [finished] processing unit 
"coreos-metadata.service" Jul 7 06:04:15.260529 ignition[1012]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Jul 7 06:04:15.260529 ignition[1012]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Jul 7 06:04:15.260529 ignition[1012]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 7 06:04:15.260529 ignition[1012]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 7 06:04:15.260529 ignition[1012]: INFO : files: files passed Jul 7 06:04:15.260529 ignition[1012]: INFO : Ignition finished successfully Jul 7 06:04:15.264423 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 7 06:04:15.268866 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 7 06:04:15.273866 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 7 06:04:15.280984 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 7 06:04:15.281709 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 7 06:04:15.288649 initrd-setup-root-after-ignition[1043]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 06:04:15.290417 initrd-setup-root-after-ignition[1043]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 7 06:04:15.292025 initrd-setup-root-after-ignition[1047]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 06:04:15.293355 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 06:04:15.294294 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 7 06:04:15.296012 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Jul 7 06:04:15.339953 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 7 06:04:15.340093 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 7 06:04:15.341378 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 7 06:04:15.342392 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 7 06:04:15.343612 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 7 06:04:15.344312 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 7 06:04:15.380825 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 06:04:15.382489 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 7 06:04:15.394518 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:04:15.395173 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:04:15.396405 systemd[1]: Stopped target timers.target - Timer Units.
Jul 7 06:04:15.397591 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 7 06:04:15.397722 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 06:04:15.398975 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 7 06:04:15.399737 systemd[1]: Stopped target basic.target - Basic System.
Jul 7 06:04:15.400917 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 7 06:04:15.401925 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 06:04:15.403025 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 7 06:04:15.404162 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 7 06:04:15.405444 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 7 06:04:15.406579 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 06:04:15.407868 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 7 06:04:15.408987 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 7 06:04:15.410223 systemd[1]: Stopped target swap.target - Swaps.
Jul 7 06:04:15.411278 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 7 06:04:15.411371 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 06:04:15.412672 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:04:15.413450 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:04:15.414415 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 7 06:04:15.414732 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:04:15.415666 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 7 06:04:15.415814 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 7 06:04:15.417333 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 7 06:04:15.417476 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 06:04:15.418150 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 7 06:04:15.418243 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 7 06:04:15.420825 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 7 06:04:15.423905 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 7 06:04:15.425475 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 7 06:04:15.425583 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:04:15.426679 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 7 06:04:15.426835 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 06:04:15.433468 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 7 06:04:15.433560 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 7 06:04:15.448340 ignition[1067]: INFO : Ignition 2.21.0
Jul 7 06:04:15.448340 ignition[1067]: INFO : Stage: umount
Jul 7 06:04:15.451479 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:04:15.451479 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 7 06:04:15.451479 ignition[1067]: INFO : umount: umount passed
Jul 7 06:04:15.451479 ignition[1067]: INFO : Ignition finished successfully
Jul 7 06:04:15.450980 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 7 06:04:15.451104 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 7 06:04:15.452140 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 7 06:04:15.452185 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 7 06:04:15.453561 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 7 06:04:15.453611 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 7 06:04:15.454832 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 7 06:04:15.454879 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 7 06:04:15.457082 systemd[1]: Stopped target network.target - Network.
Jul 7 06:04:15.458081 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 7 06:04:15.458133 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 06:04:15.484978 systemd[1]: Stopped target paths.target - Path Units.
Jul 7 06:04:15.485674 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 7 06:04:15.485797 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:04:15.486349 systemd[1]: Stopped target slices.target - Slice Units.
Jul 7 06:04:15.486832 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 7 06:04:15.487343 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 7 06:04:15.487385 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 06:04:15.487933 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 7 06:04:15.487970 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 06:04:15.488520 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 7 06:04:15.488575 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 7 06:04:15.489542 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 7 06:04:15.489585 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 7 06:04:15.490661 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 7 06:04:15.491786 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 7 06:04:15.494006 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 7 06:04:15.494553 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 7 06:04:15.494653 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 7 06:04:15.495995 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 7 06:04:15.496068 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 7 06:04:15.499512 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 7 06:04:15.499628 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 7 06:04:15.501836 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 7 06:04:15.502043 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 7 06:04:15.502143 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 7 06:04:15.505237 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 7 06:04:15.505738 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 7 06:04:15.506616 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 7 06:04:15.506653 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:04:15.508407 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 7 06:04:15.510054 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 7 06:04:15.510103 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 06:04:15.511664 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 7 06:04:15.511712 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:04:15.513878 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 7 06:04:15.513924 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:04:15.515230 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 7 06:04:15.515277 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:04:15.517094 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:04:15.520045 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 7 06:04:15.520108 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 7 06:04:15.535241 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 7 06:04:15.536408 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:04:15.537947 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 7 06:04:15.538037 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 7 06:04:15.539347 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 7 06:04:15.539405 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:04:15.540298 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 7 06:04:15.540333 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:04:15.541282 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 7 06:04:15.541329 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 06:04:15.542819 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 7 06:04:15.542863 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 7 06:04:15.543820 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 06:04:15.543872 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:04:15.549856 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 7 06:04:15.550788 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 7 06:04:15.550836 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 06:04:15.552878 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 7 06:04:15.552927 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:04:15.554465 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 7 06:04:15.554510 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 06:04:15.555834 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 7 06:04:15.555879 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:04:15.557167 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 06:04:15.557211 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:04:15.560316 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 7 06:04:15.560369 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jul 7 06:04:15.560409 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 7 06:04:15.560450 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 7 06:04:15.566672 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 7 06:04:15.566784 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 7 06:04:15.568258 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 7 06:04:15.569972 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 7 06:04:15.596088 systemd[1]: Switching root.
Jul 7 06:04:15.628259 systemd-journald[207]: Journal stopped
Jul 7 06:04:16.676200 systemd-journald[207]: Received SIGTERM from PID 1 (systemd).
Jul 7 06:04:16.676224 kernel: SELinux: policy capability network_peer_controls=1
Jul 7 06:04:16.676235 kernel: SELinux: policy capability open_perms=1
Jul 7 06:04:16.676247 kernel: SELinux: policy capability extended_socket_class=1
Jul 7 06:04:16.676255 kernel: SELinux: policy capability always_check_network=0
Jul 7 06:04:16.676264 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 7 06:04:16.676273 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 7 06:04:16.676282 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 7 06:04:16.676290 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 7 06:04:16.676299 kernel: SELinux: policy capability userspace_initial_context=0
Jul 7 06:04:16.676309 kernel: audit: type=1403 audit(1751868255.769:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 7 06:04:16.676319 systemd[1]: Successfully loaded SELinux policy in 74.328ms.
Jul 7 06:04:16.676329 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.286ms.
Jul 7 06:04:16.676340 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 06:04:16.676350 systemd[1]: Detected virtualization kvm.
Jul 7 06:04:16.676361 systemd[1]: Detected architecture x86-64.
Jul 7 06:04:16.676370 systemd[1]: Detected first boot.
Jul 7 06:04:16.676380 systemd[1]: Initializing machine ID from random generator.
Jul 7 06:04:16.676389 zram_generator::config[1110]: No configuration found.
Jul 7 06:04:16.676399 kernel: Guest personality initialized and is inactive
Jul 7 06:04:16.676408 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 7 06:04:16.676417 kernel: Initialized host personality
Jul 7 06:04:16.676428 kernel: NET: Registered PF_VSOCK protocol family
Jul 7 06:04:16.676437 systemd[1]: Populated /etc with preset unit settings.
Jul 7 06:04:16.676447 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 7 06:04:16.676457 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 7 06:04:16.676466 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 7 06:04:16.676475 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 7 06:04:16.676485 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 7 06:04:16.676496 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 7 06:04:16.676506 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 7 06:04:16.676515 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 7 06:04:16.676525 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 7 06:04:16.676534 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 7 06:04:16.676544 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 7 06:04:16.676553 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 7 06:04:16.676565 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:04:16.676574 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:04:16.676584 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 7 06:04:16.676594 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 7 06:04:16.676606 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 7 06:04:16.676616 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 06:04:16.676626 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 7 06:04:16.676636 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:04:16.676647 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:04:16.676657 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 7 06:04:16.676667 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 7 06:04:16.676676 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 7 06:04:16.676686 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 7 06:04:16.676695 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:04:16.676705 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 06:04:16.676715 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 06:04:16.676726 systemd[1]: Reached target swap.target - Swaps.
Jul 7 06:04:16.676736 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 7 06:04:16.676757 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 7 06:04:16.676767 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 7 06:04:16.676777 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:04:16.676789 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:04:16.676799 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:04:16.676809 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 7 06:04:16.676818 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 7 06:04:16.676828 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 7 06:04:16.676838 systemd[1]: Mounting media.mount - External Media Directory...
Jul 7 06:04:16.676848 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:04:16.676857 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 7 06:04:16.676869 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 7 06:04:16.676879 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 7 06:04:16.676889 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 7 06:04:16.676899 systemd[1]: Reached target machines.target - Containers.
Jul 7 06:04:16.676909 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 7 06:04:16.676919 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:04:16.676928 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 06:04:16.676938 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 7 06:04:16.676950 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:04:16.676960 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 06:04:16.676969 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:04:16.676979 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 7 06:04:16.676989 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:04:16.676999 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 7 06:04:16.677008 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 7 06:04:16.677018 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 7 06:04:16.677028 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 7 06:04:16.677039 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 7 06:04:16.677049 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:04:16.677059 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 06:04:16.677069 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 06:04:16.677079 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 06:04:16.677089 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 7 06:04:16.677098 kernel: loop: module loaded
Jul 7 06:04:16.677108 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 7 06:04:16.677120 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 06:04:16.677129 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 7 06:04:16.677139 systemd[1]: Stopped verity-setup.service.
Jul 7 06:04:16.677150 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:04:16.677160 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 7 06:04:16.677170 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 7 06:04:16.677179 kernel: fuse: init (API version 7.41)
Jul 7 06:04:16.677188 systemd[1]: Mounted media.mount - External Media Directory.
Jul 7 06:04:16.677200 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 7 06:04:16.677210 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 7 06:04:16.677219 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 7 06:04:16.677229 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 7 06:04:16.677258 systemd-journald[1201]: Collecting audit messages is disabled.
Jul 7 06:04:16.677280 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:04:16.677290 kernel: ACPI: bus type drm_connector registered
Jul 7 06:04:16.677300 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 7 06:04:16.677309 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 7 06:04:16.677320 systemd-journald[1201]: Journal started
Jul 7 06:04:16.677338 systemd-journald[1201]: Runtime Journal (/run/log/journal/6eeab7460a5049a8ad5c0ec9f73b0c27) is 8M, max 78.5M, 70.5M free.
Jul 7 06:04:16.339742 systemd[1]: Queued start job for default target multi-user.target.
Jul 7 06:04:16.353332 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jul 7 06:04:16.353863 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 7 06:04:16.681831 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 06:04:16.684628 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:04:16.684875 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:04:16.685669 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 06:04:16.685883 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 06:04:16.686684 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:04:16.686901 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:04:16.687698 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 7 06:04:16.688031 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 7 06:04:16.688881 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:04:16.689071 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 06:04:16.690208 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:04:16.691080 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 06:04:16.692043 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 7 06:04:16.692940 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 7 06:04:16.705695 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 06:04:16.709827 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 7 06:04:16.713698 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 7 06:04:16.714481 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 7 06:04:16.714557 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 06:04:16.716289 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 7 06:04:16.725047 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 7 06:04:16.728380 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:04:16.730085 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 7 06:04:16.733999 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 7 06:04:16.735819 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 06:04:16.736952 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 7 06:04:16.738515 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 06:04:16.744361 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:04:16.747083 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 7 06:04:16.748605 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 06:04:16.750673 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 7 06:04:16.753911 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 7 06:04:16.781381 systemd-journald[1201]: Time spent on flushing to /var/log/journal/6eeab7460a5049a8ad5c0ec9f73b0c27 is 54.767ms for 998 entries.
Jul 7 06:04:16.781381 systemd-journald[1201]: System Journal (/var/log/journal/6eeab7460a5049a8ad5c0ec9f73b0c27) is 8M, max 195.6M, 187.6M free.
Jul 7 06:04:16.860508 systemd-journald[1201]: Received client request to flush runtime journal.
Jul 7 06:04:16.860558 kernel: loop0: detected capacity change from 0 to 229808
Jul 7 06:04:16.860671 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 7 06:04:16.787800 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 7 06:04:16.790678 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 7 06:04:16.800294 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 7 06:04:16.818196 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:04:16.853380 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:04:16.858778 systemd-tmpfiles[1236]: ACLs are not supported, ignoring.
Jul 7 06:04:16.858860 systemd-tmpfiles[1236]: ACLs are not supported, ignoring.
Jul 7 06:04:16.865437 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 7 06:04:16.867094 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 7 06:04:16.874607 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 06:04:16.877631 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 7 06:04:16.881823 kernel: loop1: detected capacity change from 0 to 8
Jul 7 06:04:16.903977 kernel: loop2: detected capacity change from 0 to 113872
Jul 7 06:04:16.939689 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 7 06:04:16.944427 kernel: loop3: detected capacity change from 0 to 146240
Jul 7 06:04:16.945835 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 06:04:16.981538 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Jul 7 06:04:16.981927 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Jul 7 06:04:16.991481 kernel: loop4: detected capacity change from 0 to 229808
Jul 7 06:04:16.991605 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:04:17.019825 kernel: loop5: detected capacity change from 0 to 8
Jul 7 06:04:17.023784 kernel: loop6: detected capacity change from 0 to 113872
Jul 7 06:04:17.038160 kernel: loop7: detected capacity change from 0 to 146240
Jul 7 06:04:17.054913 (sd-merge)[1261]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
Jul 7 06:04:17.055650 (sd-merge)[1261]: Merged extensions into '/usr'.
Jul 7 06:04:17.060555 systemd[1]: Reload requested from client PID 1235 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 7 06:04:17.060643 systemd[1]: Reloading...
Jul 7 06:04:17.158805 zram_generator::config[1294]: No configuration found.
Jul 7 06:04:17.252735 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:04:17.329082 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 7 06:04:17.329668 systemd[1]: Reloading finished in 268 ms.
Jul 7 06:04:17.331576 ldconfig[1230]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 7 06:04:17.344268 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 7 06:04:17.345529 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 7 06:04:17.357864 systemd[1]: Starting ensure-sysext.service...
Jul 7 06:04:17.359878 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 06:04:17.384333 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 7 06:04:17.384372 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 7 06:04:17.384632 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 7 06:04:17.385397 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 7 06:04:17.386451 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 7 06:04:17.386779 systemd-tmpfiles[1332]: ACLs are not supported, ignoring.
Jul 7 06:04:17.387052 systemd-tmpfiles[1332]: ACLs are not supported, ignoring.
Jul 7 06:04:17.388281 systemd[1]: Reload requested from client PID 1331 ('systemctl') (unit ensure-sysext.service)...
Jul 7 06:04:17.388293 systemd[1]: Reloading...
Jul 7 06:04:17.395339 systemd-tmpfiles[1332]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 06:04:17.395437 systemd-tmpfiles[1332]: Skipping /boot
Jul 7 06:04:17.420553 systemd-tmpfiles[1332]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 06:04:17.420570 systemd-tmpfiles[1332]: Skipping /boot
Jul 7 06:04:17.483788 zram_generator::config[1359]: No configuration found.
Jul 7 06:04:17.579430 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:04:17.649575 systemd[1]: Reloading finished in 260 ms.
Jul 7 06:04:17.669869 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 7 06:04:17.684857 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:04:17.693410 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 7 06:04:17.697415 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 7 06:04:17.703937 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 7 06:04:17.709461 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 06:04:17.714958 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:04:17.718959 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 7 06:04:17.723329 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:04:17.724943 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:04:17.726473 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:04:17.733542 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:04:17.740036 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:04:17.740835 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:04:17.741072 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:04:17.741149 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:04:17.749412 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 7 06:04:17.752921 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 7 06:04:17.754257 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:04:17.754735 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:04:17.766193 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 7 06:04:17.768017 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:04:17.768413 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:04:17.775566 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:04:17.776886 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:04:17.776985 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:04:17.777062 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:04:17.784651 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:04:17.785246 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:04:17.794861 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 06:04:17.795497 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:04:17.795622 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:04:17.795792 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:04:17.797378 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:04:17.797838 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:04:17.799382 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:04:17.800077 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 06:04:17.806897 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 7 06:04:17.807645 systemd-udevd[1409]: Using default interface naming scheme 'v255'.
Jul 7 06:04:17.809512 systemd[1]: Finished ensure-sysext.service.
Jul 7 06:04:17.818647 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 06:04:17.822888 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 7 06:04:17.826248 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:04:17.826463 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:04:17.827562 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 06:04:17.834566 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 7 06:04:17.838525 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 06:04:17.838729 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 06:04:17.851424 augenrules[1446]: No rules
Jul 7 06:04:17.852332 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 7 06:04:17.854393 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 7 06:04:17.855586 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 7 06:04:17.856169 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 7 06:04:17.860876 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:04:17.865603 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 06:04:17.889209 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 7 06:04:18.000020 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 7 06:04:18.082800 systemd-networkd[1456]: lo: Link UP
Jul 7 06:04:18.083223 systemd-networkd[1456]: lo: Gained carrier
Jul 7 06:04:18.084863 systemd-networkd[1456]: Enumeration completed
Jul 7 06:04:18.085032 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 06:04:18.085992 systemd-networkd[1456]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:04:18.086837 systemd-networkd[1456]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 06:04:18.088016 systemd-networkd[1456]: eth0: Link UP
Jul 7 06:04:18.088250 systemd-networkd[1456]: eth0: Gained carrier
Jul 7 06:04:18.088307 systemd-networkd[1456]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:04:18.091891 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 7 06:04:18.098126 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 7 06:04:18.129696 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 7 06:04:18.172629 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 7 06:04:18.173744 systemd[1]: Reached target time-set.target - System Time Set.
Jul 7 06:04:18.176766 kernel: mousedev: PS/2 mouse device common for all mice
Jul 7 06:04:18.197782 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 7 06:04:18.200287 systemd-resolved[1407]: Positive Trust Anchors:
Jul 7 06:04:18.200486 systemd-resolved[1407]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 06:04:18.200515 systemd-resolved[1407]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 06:04:18.206286 systemd-resolved[1407]: Defaulting to hostname 'linux'.
Jul 7 06:04:18.208277 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 06:04:18.209344 systemd[1]: Reached target network.target - Network.
Jul 7 06:04:18.209857 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:04:18.210408 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 06:04:18.211026 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 7 06:04:18.211606 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 7 06:04:18.212183 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 7 06:04:18.213006 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 7 06:04:18.213737 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 7 06:04:18.215805 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 7 06:04:18.216378 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 7 06:04:18.216408 systemd[1]: Reached target paths.target - Path Units.
Jul 7 06:04:18.216912 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 06:04:18.220005 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 7 06:04:18.222086 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 7 06:04:18.227087 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 7 06:04:18.228827 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 7 06:04:18.229958 kernel: ACPI: button: Power Button [PWRF]
Jul 7 06:04:18.230226 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 7 06:04:18.238131 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 7 06:04:18.239183 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 7 06:04:18.240633 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 7 06:04:18.246672 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 06:04:18.247201 systemd[1]: Reached target basic.target - Basic System.
Jul 7 06:04:18.247727 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 7 06:04:18.247788 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 7 06:04:18.250818 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 7 06:04:18.255166 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 7 06:04:18.255386 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 7 06:04:18.255957 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 7 06:04:18.283723 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 7 06:04:18.285836 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 7 06:04:18.290076 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 7 06:04:18.293428 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 7 06:04:18.293994 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 7 06:04:18.297614 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 7 06:04:18.306086 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 7 06:04:18.315265 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 7 06:04:18.321296 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 7 06:04:18.331094 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 7 06:04:18.338543 jq[1511]: false
Jul 7 06:04:18.342821 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 7 06:04:18.346122 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 7 06:04:18.347366 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 7 06:04:18.348053 systemd[1]: Starting update-engine.service - Update Engine...
Jul 7 06:04:18.351912 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 7 06:04:18.361042 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 7 06:04:18.362040 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 7 06:04:18.362279 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 7 06:04:18.362574 systemd[1]: motdgen.service: Deactivated successfully.
Jul 7 06:04:18.364861 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 7 06:04:18.373129 jq[1531]: true
Jul 7 06:04:18.402898 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 7 06:04:18.403323 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 7 06:04:18.424337 tar[1543]: linux-amd64/LICENSE
Jul 7 06:04:18.425190 tar[1543]: linux-amd64/helm
Jul 7 06:04:18.426087 (ntainerd)[1544]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 7 06:04:18.433194 dbus-daemon[1508]: [system] SELinux support is enabled
Jul 7 06:04:18.439002 update_engine[1530]: I20250707 06:04:18.435654 1530 main.cc:92] Flatcar Update Engine starting
Jul 7 06:04:18.433319 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 7 06:04:18.437022 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 7 06:04:18.437048 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 7 06:04:18.438257 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 7 06:04:18.438275 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 7 06:04:18.464664 jq[1542]: true
Jul 7 06:04:18.468292 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Refreshing passwd entry cache
Jul 7 06:04:18.468306 oslogin_cache_refresh[1514]: Refreshing passwd entry cache
Jul 7 06:04:18.471299 systemd[1]: Started update-engine.service - Update Engine.
Jul 7 06:04:18.472938 update_engine[1530]: I20250707 06:04:18.472891 1530 update_check_scheduler.cc:74] Next update check in 2m29s
Jul 7 06:04:18.476932 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Failure getting users, quitting
Jul 7 06:04:18.476932 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 7 06:04:18.476923 oslogin_cache_refresh[1514]: Failure getting users, quitting
Jul 7 06:04:18.477025 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Refreshing group entry cache
Jul 7 06:04:18.476938 oslogin_cache_refresh[1514]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 7 06:04:18.476975 oslogin_cache_refresh[1514]: Refreshing group entry cache
Jul 7 06:04:18.477463 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Failure getting groups, quitting
Jul 7 06:04:18.477463 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 7 06:04:18.477451 oslogin_cache_refresh[1514]: Failure getting groups, quitting
Jul 7 06:04:18.477460 oslogin_cache_refresh[1514]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 7 06:04:18.495371 coreos-metadata[1507]: Jul 07 06:04:18.495 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Jul 7 06:04:18.498509 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 7 06:04:18.500192 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 7 06:04:18.500436 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 7 06:04:18.508191 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jul 7 06:04:18.521827 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 7 06:04:18.523711 extend-filesystems[1512]: Found /dev/sda6
Jul 7 06:04:18.526854 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:04:18.536097 extend-filesystems[1512]: Found /dev/sda9
Jul 7 06:04:18.553157 extend-filesystems[1512]: Checking size of /dev/sda9
Jul 7 06:04:18.572607 bash[1578]: Updated "/home/core/.ssh/authorized_keys"
Jul 7 06:04:18.573477 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 7 06:04:18.578857 systemd-networkd[1456]: eth0: DHCPv4 address 172.236.119.245/24, gateway 172.236.119.1 acquired from 23.205.167.178
Jul 7 06:04:18.580001 systemd[1]: Starting sshkeys.service...
Jul 7 06:04:18.580422 systemd-timesyncd[1439]: Network configuration changed, trying to establish connection.
Jul 7 06:04:18.582241 dbus-daemon[1508]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1456 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jul 7 06:04:18.587908 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jul 7 06:04:18.607177 extend-filesystems[1512]: Resized partition /dev/sda9
Jul 7 06:04:18.615128 extend-filesystems[1594]: resize2fs 1.47.2 (1-Jan-2025)
Jul 7 06:04:18.616833 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 7 06:04:18.621254 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 7 06:04:19.160495 systemd-resolved[1407]: Clock change detected. Flushing caches.
Jul 7 06:04:19.160862 systemd-timesyncd[1439]: Contacted time server 64.246.132.14:123 (0.flatcar.pool.ntp.org).
Jul 7 06:04:19.160909 systemd-timesyncd[1439]: Initial clock synchronization to Mon 2025-07-07 06:04:19.160408 UTC.
Jul 7 06:04:19.166729 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks
Jul 7 06:04:19.192406 systemd-logind[1528]: New seat seat0.
Jul 7 06:04:19.193644 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 7 06:04:19.222618 systemd-logind[1528]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 7 06:04:19.263295 systemd-logind[1528]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 7 06:04:19.282871 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 7 06:04:19.339488 locksmithd[1561]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 7 06:04:19.369794 coreos-metadata[1595]: Jul 07 06:04:19.369 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Jul 7 06:04:19.385185 kernel: EDAC MC: Ver: 3.0.0
Jul 7 06:04:19.385218 sshd_keygen[1558]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 7 06:04:19.434260 containerd[1544]: time="2025-07-07T06:04:19Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 7 06:04:19.465724 kernel: EXT4-fs (sda9): resized filesystem to 20360187
Jul 7 06:04:19.475108 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 7 06:04:19.478942 containerd[1544]: time="2025-07-07T06:04:19.478133621Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 7 06:04:19.483179 extend-filesystems[1594]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Jul 7 06:04:19.483179 extend-filesystems[1594]: old_desc_blocks = 1, new_desc_blocks = 10
Jul 7 06:04:19.483179 extend-filesystems[1594]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long.
Jul 7 06:04:19.662851 coreos-metadata[1595]: Jul 07 06:04:19.483 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1
Jul 7 06:04:19.662851 coreos-metadata[1595]: Jul 07 06:04:19.623 INFO Fetch successful
Jul 7 06:04:19.573802 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 7 06:04:19.662946 extend-filesystems[1512]: Resized filesystem in /dev/sda9
Jul 7 06:04:19.663661 containerd[1544]: time="2025-07-07T06:04:19.521138779Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.65µs"
Jul 7 06:04:19.663661 containerd[1544]: time="2025-07-07T06:04:19.521162349Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 7 06:04:19.663661 containerd[1544]: time="2025-07-07T06:04:19.521181659Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 7 06:04:19.663661 containerd[1544]: time="2025-07-07T06:04:19.521503189Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 7 06:04:19.663661 containerd[1544]: time="2025-07-07T06:04:19.521523879Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 7 06:04:19.663661 containerd[1544]: time="2025-07-07T06:04:19.521546549Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 7 06:04:19.663661 containerd[1544]: time="2025-07-07T06:04:19.521608779Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 7 06:04:19.663661 containerd[1544]: time="2025-07-07T06:04:19.521619439Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 7 06:04:19.663661 containerd[1544]: time="2025-07-07T06:04:19.527469606Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 7 06:04:19.663661 containerd[1544]: time="2025-07-07T06:04:19.527502576Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 7 06:04:19.663661 containerd[1544]: time="2025-07-07T06:04:19.527518806Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 7 06:04:19.663661 containerd[1544]: time="2025-07-07T06:04:19.527527016Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 7 06:04:19.574076 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 7 06:04:19.667474 containerd[1544]: time="2025-07-07T06:04:19.527651286Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 7 06:04:19.667474 containerd[1544]: time="2025-07-07T06:04:19.527903386Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 7 06:04:19.667474 containerd[1544]: time="2025-07-07T06:04:19.528010436Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 7 06:04:19.667474 containerd[1544]: time="2025-07-07T06:04:19.528021476Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 7 06:04:19.667474 containerd[1544]: time="2025-07-07T06:04:19.528053296Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 7 06:04:19.667474 containerd[1544]: time="2025-07-07T06:04:19.528264916Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 7 06:04:19.667474 containerd[1544]: time="2025-07-07T06:04:19.528329336Z" level=info msg="metadata content store policy set" policy=shared
Jul 7 06:04:19.667474 containerd[1544]: time="2025-07-07T06:04:19.532331514Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 7 06:04:19.667474 containerd[1544]: time="2025-07-07T06:04:19.532375824Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 7 06:04:19.667474 containerd[1544]: time="2025-07-07T06:04:19.532391884Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 7 06:04:19.667474 containerd[1544]: time="2025-07-07T06:04:19.532406404Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 7 06:04:19.667474 containerd[1544]: time="2025-07-07T06:04:19.532447663Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 7 06:04:19.667474 containerd[1544]: time="2025-07-07T06:04:19.532459823Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 7 06:04:19.667474 containerd[1544]: time="2025-07-07T06:04:19.532471913Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 7 06:04:19.667690 containerd[1544]: time="2025-07-07T06:04:19.532482163Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 7 06:04:19.667690 containerd[1544]: time="2025-07-07T06:04:19.532491043Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 7 06:04:19.667690 containerd[1544]: time="2025-07-07T06:04:19.532499103Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 7 06:04:19.667690 containerd[1544]: time="2025-07-07T06:04:19.532506433Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 7 06:04:19.667690 containerd[1544]: time="2025-07-07T06:04:19.532526863Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 7 06:04:19.667690 containerd[1544]: time="2025-07-07T06:04:19.532642213Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 7 06:04:19.667690 containerd[1544]: time="2025-07-07T06:04:19.532661103Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 7 06:04:19.667690 containerd[1544]: time="2025-07-07T06:04:19.532674533Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 7 06:04:19.667690 containerd[1544]: time="2025-07-07T06:04:19.532683903Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 7 06:04:19.667690 containerd[1544]: time="2025-07-07T06:04:19.532712163Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 7 06:04:19.667690 containerd[1544]: time="2025-07-07T06:04:19.532742673Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 7 06:04:19.667690 containerd[1544]: time="2025-07-07T06:04:19.532753613Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 7 06:04:19.667690 containerd[1544]: time="2025-07-07T06:04:19.532764473Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 7 06:04:19.667690 containerd[1544]: time="2025-07-07T06:04:19.532774403Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 7 06:04:19.667690 containerd[1544]: time="2025-07-07T06:04:19.532783053Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 7 06:04:19.670489 containerd[1544]: time="2025-07-07T06:04:19.532792313Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 7 06:04:19.670489 containerd[1544]: time="2025-07-07T06:04:19.533046883Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 7 06:04:19.670489 containerd[1544]: time="2025-07-07T06:04:19.533060783Z" level=info msg="Start snapshots syncer"
Jul 7 06:04:19.670489 containerd[1544]: time="2025-07-07T06:04:19.533332093Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 7 06:04:19.670552 containerd[1544]: time="2025-07-07T06:04:19.533530183Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 7 06:04:19.670552 containerd[1544]: time="2025-07-07T06:04:19.533572113Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 7 06:04:19.670552 containerd[1544]: time="2025-07-07T06:04:19.533631003Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 7 06:04:19.670552 containerd[1544]: time="2025-07-07T06:04:19.533745763Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 7 06:04:19.670552 containerd[1544]: time="2025-07-07T06:04:19.533764913Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 7 06:04:19.670552 containerd[1544]: time="2025-07-07T06:04:19.533773393Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 7 06:04:19.670552 containerd[1544]: time="2025-07-07T06:04:19.533783403Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 7 06:04:19.670552 containerd[1544]: time="2025-07-07T06:04:19.533793633Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 7 06:04:19.670552 containerd[1544]: time="2025-07-07T06:04:19.533804293Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 7 06:04:19.670552 containerd[1544]: time="2025-07-07T06:04:19.533814023Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 7 06:04:19.670552 containerd[1544]: time="2025-07-07T06:04:19.533843963Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 7 06:04:19.670552 containerd[1544]: time="2025-07-07T06:04:19.533853623Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 7 06:04:19.670552 containerd[1544]: time="2025-07-07T06:04:19.533862453Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 7 06:04:19.670552 containerd[1544]: time="2025-07-07T06:04:19.533879693Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 7 06:04:19.670552 containerd[1544]: time="2025-07-07T06:04:19.533892043Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 7 06:04:19.670552 containerd[1544]: time="2025-07-07T06:04:19.533899283Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 7 06:04:19.670552 containerd[1544]: time="2025-07-07T06:04:19.533907513Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 7 06:04:19.670552 containerd[1544]: time="2025-07-07T06:04:19.533913873Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 7 06:04:19.670552 containerd[1544]: time="2025-07-07T06:04:19.533921723Z" level=info msg="loading plugin"
id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 7 06:04:19.670552 containerd[1544]: time="2025-07-07T06:04:19.533934093Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 7 06:04:19.670552 containerd[1544]: time="2025-07-07T06:04:19.533949623Z" level=info msg="runtime interface created" Jul 7 06:04:19.670552 containerd[1544]: time="2025-07-07T06:04:19.533954323Z" level=info msg="created NRI interface" Jul 7 06:04:19.670552 containerd[1544]: time="2025-07-07T06:04:19.533961823Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 7 06:04:19.670552 containerd[1544]: time="2025-07-07T06:04:19.533971003Z" level=info msg="Connect containerd service" Jul 7 06:04:19.670552 containerd[1544]: time="2025-07-07T06:04:19.533989823Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 06:04:19.671321 containerd[1544]: time="2025-07-07T06:04:19.539395980Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 06:04:19.715854 dbus-daemon[1508]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 7 06:04:19.718930 dbus-daemon[1508]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1588 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 7 06:04:19.732627 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 7 06:04:19.742448 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 7 06:04:19.758463 containerd[1544]: time="2025-07-07T06:04:19.758430110Z" level=info msg="Start subscribing containerd event"
Jul 7 06:04:19.758566 containerd[1544]: time="2025-07-07T06:04:19.758474520Z" level=info msg="Start recovering state"
Jul 7 06:04:19.759430 containerd[1544]: time="2025-07-07T06:04:19.759408060Z" level=info msg="Start event monitor"
Jul 7 06:04:19.759462 containerd[1544]: time="2025-07-07T06:04:19.759431890Z" level=info msg="Start cni network conf syncer for default"
Jul 7 06:04:19.759761 containerd[1544]: time="2025-07-07T06:04:19.759441900Z" level=info msg="Start streaming server"
Jul 7 06:04:19.760008 containerd[1544]: time="2025-07-07T06:04:19.759767060Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jul 7 06:04:19.760008 containerd[1544]: time="2025-07-07T06:04:19.759777720Z" level=info msg="runtime interface starting up..."
Jul 7 06:04:19.760008 containerd[1544]: time="2025-07-07T06:04:19.759783270Z" level=info msg="starting plugins..."
Jul 7 06:04:19.760008 containerd[1544]: time="2025-07-07T06:04:19.759798040Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jul 7 06:04:19.765555 containerd[1544]: time="2025-07-07T06:04:19.765527197Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 7 06:04:19.765752 containerd[1544]: time="2025-07-07T06:04:19.765729877Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 7 06:04:19.768761 update-ssh-keys[1640]: Updated "/home/core/.ssh/authorized_keys"
Jul 7 06:04:19.769652 containerd[1544]: time="2025-07-07T06:04:19.769356265Z" level=info msg="containerd successfully booted in 0.335712s"
Jul 7 06:04:19.774117 systemd[1]: Started containerd.service - containerd container runtime.
Jul 7 06:04:19.776416 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jul 7 06:04:19.790728 systemd[1]: Finished sshkeys.service.
Jul 7 06:04:19.801470 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 7 06:04:19.804766 systemd[1]: Starting polkit.service - Authorization Manager...
Jul 7 06:04:19.829938 systemd[1]: issuegen.service: Deactivated successfully.
Jul 7 06:04:19.830369 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 7 06:04:19.833356 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 7 06:04:19.865161 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 7 06:04:19.868951 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 7 06:04:19.874948 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 7 06:04:19.875604 systemd[1]: Reached target getty.target - Login Prompts.
Jul 7 06:04:19.909694 polkitd[1648]: Started polkitd version 126
Jul 7 06:04:19.913959 polkitd[1648]: Loading rules from directory /etc/polkit-1/rules.d
Jul 7 06:04:19.914409 polkitd[1648]: Loading rules from directory /run/polkit-1/rules.d
Jul 7 06:04:19.914490 polkitd[1648]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Jul 7 06:04:19.914897 polkitd[1648]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Jul 7 06:04:19.915012 polkitd[1648]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Jul 7 06:04:19.915101 polkitd[1648]: Loading rules from directory /usr/share/polkit-1/rules.d
Jul 7 06:04:19.915773 polkitd[1648]: Finished loading, compiling and executing 2 rules
Jul 7 06:04:19.916350 systemd[1]: Started polkit.service - Authorization Manager.
Jul 7 06:04:19.916586 dbus-daemon[1508]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jul 7 06:04:19.916962 polkitd[1648]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jul 7 06:04:19.929519 systemd-hostnamed[1588]: Hostname set to <172-236-119-245> (transient)
Jul 7 06:04:19.929549 systemd-resolved[1407]: System hostname changed to '172-236-119-245'.
Jul 7 06:04:20.039606 coreos-metadata[1507]: Jul 07 06:04:20.039 INFO Putting http://169.254.169.254/v1/token: Attempt #2
Jul 7 06:04:20.053430 tar[1543]: linux-amd64/README.md
Jul 7 06:04:20.070539 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 7 06:04:20.134246 coreos-metadata[1507]: Jul 07 06:04:20.134 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1
Jul 7 06:04:20.315991 coreos-metadata[1507]: Jul 07 06:04:20.315 INFO Fetch successful
Jul 7 06:04:20.316292 coreos-metadata[1507]: Jul 07 06:04:20.316 INFO Fetching http://169.254.169.254/v1/network: Attempt #1
Jul 7 06:04:20.427892 systemd-networkd[1456]: eth0: Gained IPv6LL
Jul 7 06:04:20.431083 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 7 06:04:20.432170 systemd[1]: Reached target network-online.target - Network is Online.
Jul 7 06:04:20.435216 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:04:20.438871 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 7 06:04:20.459147 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 7 06:04:20.568725 coreos-metadata[1507]: Jul 07 06:04:20.568 INFO Fetch successful
Jul 7 06:04:20.682059 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 7 06:04:20.684143 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 7 06:04:21.324744 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:04:21.326055 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 7 06:04:21.331335 (kubelet)[1705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 06:04:21.364626 systemd[1]: Startup finished in 2.863s (kernel) + 7.092s (initrd) + 5.137s (userspace) = 15.093s.
Jul 7 06:04:21.871218 kubelet[1705]: E0707 06:04:21.870976 1705 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 06:04:21.874837 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 06:04:21.875023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 06:04:21.875538 systemd[1]: kubelet.service: Consumed 889ms CPU time, 266.3M memory peak.
Jul 7 06:04:23.430745 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 7 06:04:23.432082 systemd[1]: Started sshd@0-172.236.119.245:22-147.75.109.163:48084.service - OpenSSH per-connection server daemon (147.75.109.163:48084).
Jul 7 06:04:23.789420 sshd[1716]: Accepted publickey for core from 147.75.109.163 port 48084 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4
Jul 7 06:04:23.791144 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:04:23.797263 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 7 06:04:23.798922 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 7 06:04:23.806831 systemd-logind[1528]: New session 1 of user core.
Jul 7 06:04:23.819800 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 7 06:04:23.823566 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 7 06:04:23.837259 (systemd)[1720]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 7 06:04:23.839773 systemd-logind[1528]: New session c1 of user core.
Jul 7 06:04:23.962565 systemd[1720]: Queued start job for default target default.target.
Jul 7 06:04:23.975939 systemd[1720]: Created slice app.slice - User Application Slice.
Jul 7 06:04:23.975966 systemd[1720]: Reached target paths.target - Paths.
Jul 7 06:04:23.976281 systemd[1720]: Reached target timers.target - Timers.
Jul 7 06:04:23.978059 systemd[1720]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 7 06:04:23.990964 systemd[1720]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 7 06:04:23.991086 systemd[1720]: Reached target sockets.target - Sockets.
Jul 7 06:04:23.991300 systemd[1720]: Reached target basic.target - Basic System.
Jul 7 06:04:23.991387 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 7 06:04:23.992254 systemd[1720]: Reached target default.target - Main User Target.
Jul 7 06:04:23.992297 systemd[1720]: Startup finished in 146ms.
Jul 7 06:04:23.992615 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 7 06:04:24.270584 systemd[1]: Started sshd@1-172.236.119.245:22-147.75.109.163:48086.service - OpenSSH per-connection server daemon (147.75.109.163:48086).
Jul 7 06:04:24.621309 sshd[1731]: Accepted publickey for core from 147.75.109.163 port 48086 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4
Jul 7 06:04:24.623221 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:04:24.629801 systemd-logind[1528]: New session 2 of user core.
Jul 7 06:04:24.638853 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 7 06:04:24.876319 sshd[1733]: Connection closed by 147.75.109.163 port 48086
Jul 7 06:04:24.877379 sshd-session[1731]: pam_unix(sshd:session): session closed for user core
Jul 7 06:04:24.881958 systemd[1]: sshd@1-172.236.119.245:22-147.75.109.163:48086.service: Deactivated successfully.
Jul 7 06:04:24.882567 systemd-logind[1528]: Session 2 logged out. Waiting for processes to exit.
Jul 7 06:04:24.884962 systemd[1]: session-2.scope: Deactivated successfully.
Jul 7 06:04:24.887389 systemd-logind[1528]: Removed session 2.
Jul 7 06:04:24.938194 systemd[1]: Started sshd@2-172.236.119.245:22-147.75.109.163:48094.service - OpenSSH per-connection server daemon (147.75.109.163:48094).
Jul 7 06:04:25.281722 sshd[1739]: Accepted publickey for core from 147.75.109.163 port 48094 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4
Jul 7 06:04:25.283394 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:04:25.289672 systemd-logind[1528]: New session 3 of user core.
Jul 7 06:04:25.297836 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 7 06:04:25.530526 sshd[1741]: Connection closed by 147.75.109.163 port 48094
Jul 7 06:04:25.531242 sshd-session[1739]: pam_unix(sshd:session): session closed for user core
Jul 7 06:04:25.535441 systemd[1]: sshd@2-172.236.119.245:22-147.75.109.163:48094.service: Deactivated successfully.
Jul 7 06:04:25.537612 systemd[1]: session-3.scope: Deactivated successfully.
Jul 7 06:04:25.538485 systemd-logind[1528]: Session 3 logged out. Waiting for processes to exit.
Jul 7 06:04:25.539925 systemd-logind[1528]: Removed session 3.
Jul 7 06:04:25.594182 systemd[1]: Started sshd@3-172.236.119.245:22-147.75.109.163:48106.service - OpenSSH per-connection server daemon (147.75.109.163:48106).
Jul 7 06:04:25.954227 sshd[1747]: Accepted publickey for core from 147.75.109.163 port 48106 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4
Jul 7 06:04:25.955768 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:04:25.960736 systemd-logind[1528]: New session 4 of user core.
Jul 7 06:04:25.974816 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 7 06:04:26.210750 sshd[1749]: Connection closed by 147.75.109.163 port 48106
Jul 7 06:04:26.211400 sshd-session[1747]: pam_unix(sshd:session): session closed for user core
Jul 7 06:04:26.216509 systemd[1]: sshd@3-172.236.119.245:22-147.75.109.163:48106.service: Deactivated successfully.
Jul 7 06:04:26.219032 systemd[1]: session-4.scope: Deactivated successfully.
Jul 7 06:04:26.220373 systemd-logind[1528]: Session 4 logged out. Waiting for processes to exit.
Jul 7 06:04:26.221497 systemd-logind[1528]: Removed session 4.
Jul 7 06:04:26.285336 systemd[1]: Started sshd@4-172.236.119.245:22-147.75.109.163:40822.service - OpenSSH per-connection server daemon (147.75.109.163:40822).
Jul 7 06:04:26.647735 sshd[1755]: Accepted publickey for core from 147.75.109.163 port 40822 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4
Jul 7 06:04:26.649613 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:04:26.655322 systemd-logind[1528]: New session 5 of user core.
Jul 7 06:04:26.661848 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 7 06:04:26.862446 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 7 06:04:26.862759 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 06:04:26.875042 sudo[1758]: pam_unix(sudo:session): session closed for user root
Jul 7 06:04:26.928320 sshd[1757]: Connection closed by 147.75.109.163 port 40822
Jul 7 06:04:26.929273 sshd-session[1755]: pam_unix(sshd:session): session closed for user core
Jul 7 06:04:26.934267 systemd[1]: sshd@4-172.236.119.245:22-147.75.109.163:40822.service: Deactivated successfully.
Jul 7 06:04:26.936291 systemd[1]: session-5.scope: Deactivated successfully.
Jul 7 06:04:26.937180 systemd-logind[1528]: Session 5 logged out. Waiting for processes to exit.
Jul 7 06:04:26.939127 systemd-logind[1528]: Removed session 5.
Jul 7 06:04:27.002237 systemd[1]: Started sshd@5-172.236.119.245:22-147.75.109.163:40836.service - OpenSSH per-connection server daemon (147.75.109.163:40836).
Jul 7 06:04:27.357995 sshd[1764]: Accepted publickey for core from 147.75.109.163 port 40836 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4
Jul 7 06:04:27.359867 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:04:27.364374 systemd-logind[1528]: New session 6 of user core.
Jul 7 06:04:27.375827 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 7 06:04:27.561875 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 7 06:04:27.562195 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 06:04:27.566274 sudo[1768]: pam_unix(sudo:session): session closed for user root
Jul 7 06:04:27.571160 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 7 06:04:27.571435 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 06:04:27.579653 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 7 06:04:27.621585 augenrules[1790]: No rules
Jul 7 06:04:27.622043 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 7 06:04:27.622270 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 7 06:04:27.623295 sudo[1767]: pam_unix(sudo:session): session closed for user root
Jul 7 06:04:27.675843 sshd[1766]: Connection closed by 147.75.109.163 port 40836
Jul 7 06:04:27.676252 sshd-session[1764]: pam_unix(sshd:session): session closed for user core
Jul 7 06:04:27.679628 systemd[1]: sshd@5-172.236.119.245:22-147.75.109.163:40836.service: Deactivated successfully.
Jul 7 06:04:27.681322 systemd[1]: session-6.scope: Deactivated successfully.
Jul 7 06:04:27.682328 systemd-logind[1528]: Session 6 logged out. Waiting for processes to exit.
Jul 7 06:04:27.683337 systemd-logind[1528]: Removed session 6.
Jul 7 06:04:27.741602 systemd[1]: Started sshd@6-172.236.119.245:22-147.75.109.163:40838.service - OpenSSH per-connection server daemon (147.75.109.163:40838).
Jul 7 06:04:28.101463 sshd[1799]: Accepted publickey for core from 147.75.109.163 port 40838 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4
Jul 7 06:04:28.103199 sshd-session[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:04:28.108421 systemd-logind[1528]: New session 7 of user core.
Jul 7 06:04:28.122912 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 7 06:04:28.305958 sudo[1802]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 7 06:04:28.306264 sudo[1802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 06:04:28.578871 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 7 06:04:28.607997 (dockerd)[1820]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 7 06:04:28.821032 dockerd[1820]: time="2025-07-07T06:04:28.820962598Z" level=info msg="Starting up"
Jul 7 06:04:28.822865 dockerd[1820]: time="2025-07-07T06:04:28.822841087Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 7 06:04:28.856544 systemd[1]: var-lib-docker-metacopy\x2dcheck2108149424-merged.mount: Deactivated successfully.
Jul 7 06:04:28.880040 dockerd[1820]: time="2025-07-07T06:04:28.879813969Z" level=info msg="Loading containers: start."
Jul 7 06:04:28.893000 kernel: Initializing XFRM netlink socket
Jul 7 06:04:29.130467 systemd-networkd[1456]: docker0: Link UP
Jul 7 06:04:29.133367 dockerd[1820]: time="2025-07-07T06:04:29.133333292Z" level=info msg="Loading containers: done."
Jul 7 06:04:29.146061 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1794467568-merged.mount: Deactivated successfully.
Jul 7 06:04:29.147318 dockerd[1820]: time="2025-07-07T06:04:29.147107195Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 7 06:04:29.147318 dockerd[1820]: time="2025-07-07T06:04:29.147163345Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 7 06:04:29.147318 dockerd[1820]: time="2025-07-07T06:04:29.147265335Z" level=info msg="Initializing buildkit"
Jul 7 06:04:29.169801 dockerd[1820]: time="2025-07-07T06:04:29.169778524Z" level=info msg="Completed buildkit initialization"
Jul 7 06:04:29.173010 dockerd[1820]: time="2025-07-07T06:04:29.172986912Z" level=info msg="Daemon has completed initialization"
Jul 7 06:04:29.173080 dockerd[1820]: time="2025-07-07T06:04:29.173022812Z" level=info msg="API listen on /run/docker.sock"
Jul 7 06:04:29.173212 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 7 06:04:29.709232 containerd[1544]: time="2025-07-07T06:04:29.709203134Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\""
Jul 7 06:04:30.296892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2390836599.mount: Deactivated successfully.
Jul 7 06:04:31.467126 containerd[1544]: time="2025-07-07T06:04:31.467066605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:31.468058 containerd[1544]: time="2025-07-07T06:04:31.468006404Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079105"
Jul 7 06:04:31.468683 containerd[1544]: time="2025-07-07T06:04:31.468620844Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:31.470742 containerd[1544]: time="2025-07-07T06:04:31.470676093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:31.472750 containerd[1544]: time="2025-07-07T06:04:31.472133192Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 1.762898758s"
Jul 7 06:04:31.472750 containerd[1544]: time="2025-07-07T06:04:31.472177682Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\""
Jul 7 06:04:31.473168 containerd[1544]: time="2025-07-07T06:04:31.473144462Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\""
Jul 7 06:04:31.881538 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 7 06:04:31.883909 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:04:32.064894 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:04:32.073150 (kubelet)[2086]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 06:04:32.115066 kubelet[2086]: E0707 06:04:32.115024 2086 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 06:04:32.121270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 06:04:32.121475 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 06:04:32.122171 systemd[1]: kubelet.service: Consumed 193ms CPU time, 111.2M memory peak.
Jul 7 06:04:32.759927 containerd[1544]: time="2025-07-07T06:04:32.759864868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:32.761593 containerd[1544]: time="2025-07-07T06:04:32.760901348Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018952"
Jul 7 06:04:32.763654 containerd[1544]: time="2025-07-07T06:04:32.763622676Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:32.767414 containerd[1544]: time="2025-07-07T06:04:32.767392694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:32.768135 containerd[1544]: time="2025-07-07T06:04:32.768113564Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 1.294880942s"
Jul 7 06:04:32.768208 containerd[1544]: time="2025-07-07T06:04:32.768193694Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\""
Jul 7 06:04:32.769214 containerd[1544]: time="2025-07-07T06:04:32.769188294Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\""
Jul 7 06:04:33.820726 containerd[1544]: time="2025-07-07T06:04:33.819781748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:33.820726 containerd[1544]: time="2025-07-07T06:04:33.820691098Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155061"
Jul 7 06:04:33.821605 containerd[1544]: time="2025-07-07T06:04:33.821580237Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:33.823600 containerd[1544]: time="2025-07-07T06:04:33.823570116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:33.824646 containerd[1544]: time="2025-07-07T06:04:33.824617116Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 1.055401572s"
Jul 7 06:04:33.824728 containerd[1544]: time="2025-07-07T06:04:33.824647806Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\""
Jul 7 06:04:33.826100 containerd[1544]: time="2025-07-07T06:04:33.826070355Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\""
Jul 7 06:04:34.942032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2990461160.mount: Deactivated successfully.
Jul 7 06:04:35.376636 containerd[1544]: time="2025-07-07T06:04:35.376318280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:35.377674 containerd[1544]: time="2025-07-07T06:04:35.377447999Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892752"
Jul 7 06:04:35.378566 containerd[1544]: time="2025-07-07T06:04:35.378529059Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:35.380286 containerd[1544]: time="2025-07-07T06:04:35.380251198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:35.380792 containerd[1544]: time="2025-07-07T06:04:35.380751717Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 1.554645702s"
Jul 7 06:04:35.380874 containerd[1544]: time="2025-07-07T06:04:35.380857667Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\""
Jul 7 06:04:35.381641 containerd[1544]: time="2025-07-07T06:04:35.381453057Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jul 7 06:04:35.990131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3282278597.mount: Deactivated successfully.
Jul 7 06:04:36.854831 containerd[1544]: time="2025-07-07T06:04:36.854772650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:36.855775 containerd[1544]: time="2025-07-07T06:04:36.855576650Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942244"
Jul 7 06:04:36.856613 containerd[1544]: time="2025-07-07T06:04:36.856583359Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:36.859141 containerd[1544]: time="2025-07-07T06:04:36.859108148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:36.860159 containerd[1544]: time="2025-07-07T06:04:36.860084248Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.478604961s"
Jul 7 06:04:36.860218 containerd[1544]: time="2025-07-07T06:04:36.860159668Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Jul 7 06:04:36.860631 containerd[1544]: time="2025-07-07T06:04:36.860606777Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 7 06:04:37.341181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1467475355.mount: Deactivated successfully.
Jul 7 06:04:37.352740 containerd[1544]: time="2025-07-07T06:04:37.352689541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 06:04:37.353264 containerd[1544]: time="2025-07-07T06:04:37.353231871Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321144"
Jul 7 06:04:37.353764 containerd[1544]: time="2025-07-07T06:04:37.353739601Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 06:04:37.355730 containerd[1544]: time="2025-07-07T06:04:37.355080880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 06:04:37.355730 containerd[1544]: time="2025-07-07T06:04:37.355617750Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 494.984843ms"
Jul 7 06:04:37.355730 containerd[1544]: time="2025-07-07T06:04:37.355639060Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 7 06:04:37.356248 containerd[1544]: time="2025-07-07T06:04:37.356225039Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jul 7 06:04:37.878495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2864158370.mount: Deactivated successfully.
Jul 7 06:04:39.309272 containerd[1544]: time="2025-07-07T06:04:39.309210143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:39.310421 containerd[1544]: time="2025-07-07T06:04:39.310057362Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247181"
Jul 7 06:04:39.310887 containerd[1544]: time="2025-07-07T06:04:39.310858342Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:39.313295 containerd[1544]: time="2025-07-07T06:04:39.313262441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:39.314586 containerd[1544]: time="2025-07-07T06:04:39.314563640Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 1.958315261s"
Jul 7 06:04:39.314676 containerd[1544]: time="2025-07-07T06:04:39.314659680Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Jul 7 06:04:41.845908 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:04:41.846051 systemd[1]: kubelet.service: Consumed 193ms CPU time, 111.2M memory peak.
Jul 7 06:04:41.849106 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:04:41.886776 systemd[1]: Reload requested from client PID 2248 ('systemctl') (unit session-7.scope)...
Jul 7 06:04:41.886800 systemd[1]: Reloading...
Jul 7 06:04:42.037718 zram_generator::config[2292]: No configuration found.
Jul 7 06:04:42.134220 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:04:42.245388 systemd[1]: Reloading finished in 358 ms.
Jul 7 06:04:42.308168 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 7 06:04:42.308262 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 7 06:04:42.308545 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:04:42.308584 systemd[1]: kubelet.service: Consumed 142ms CPU time, 98.3M memory peak.
Jul 7 06:04:42.310098 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:04:42.476166 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:04:42.486145 (kubelet)[2347]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 7 06:04:42.521258 kubelet[2347]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 06:04:42.521258 kubelet[2347]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 7 06:04:42.521258 kubelet[2347]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 06:04:42.521588 kubelet[2347]: I0707 06:04:42.521304 2347 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 7 06:04:42.580985 kubelet[2347]: I0707 06:04:42.580959 2347 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 7 06:04:42.580985 kubelet[2347]: I0707 06:04:42.580978 2347 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 7 06:04:42.581127 kubelet[2347]: I0707 06:04:42.581108 2347 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 7 06:04:42.612491 kubelet[2347]: I0707 06:04:42.612256 2347 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 7 06:04:42.612491 kubelet[2347]: E0707 06:04:42.612441 2347 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.236.119.245:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.236.119.245:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jul 7 06:04:42.618536 kubelet[2347]: I0707 06:04:42.618515 2347 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 7 06:04:42.623880 kubelet[2347]: I0707 06:04:42.623865 2347 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 7 06:04:42.624084 kubelet[2347]: I0707 06:04:42.624052 2347 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 7 06:04:42.624216 kubelet[2347]: I0707 06:04:42.624078 2347 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-236-119-245","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 7 06:04:42.624322 kubelet[2347]: I0707 06:04:42.624221 2347 topology_manager.go:138] "Creating topology manager with none policy"
Jul 7 06:04:42.624322 kubelet[2347]: I0707 06:04:42.624229 2347 container_manager_linux.go:303] "Creating device plugin manager"
Jul 7 06:04:42.625046 kubelet[2347]: I0707 06:04:42.625024 2347 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 06:04:42.627152 kubelet[2347]: I0707 06:04:42.627135 2347 kubelet.go:480] "Attempting to sync node with API server"
Jul 7 06:04:42.627152 kubelet[2347]: I0707 06:04:42.627150 2347 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 7 06:04:42.627426 kubelet[2347]: I0707 06:04:42.627176 2347 kubelet.go:386] "Adding apiserver pod source"
Jul 7 06:04:42.627426 kubelet[2347]: I0707 06:04:42.627388 2347 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 7 06:04:42.635515 kubelet[2347]: E0707 06:04:42.635232 2347 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.236.119.245:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.236.119.245:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 7 06:04:42.635515 kubelet[2347]: E0707 06:04:42.635301 2347 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.236.119.245:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-236-119-245&limit=500&resourceVersion=0\": dial tcp 172.236.119.245:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jul 7 06:04:42.635668 kubelet[2347]: I0707 06:04:42.635648 2347 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 7 06:04:42.636263 kubelet[2347]: I0707 06:04:42.636244 2347 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 7 06:04:42.637079 kubelet[2347]: W0707 06:04:42.637058 2347 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 7 06:04:42.641095 kubelet[2347]: I0707 06:04:42.640586 2347 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 7 06:04:42.641095 kubelet[2347]: I0707 06:04:42.640651 2347 server.go:1289] "Started kubelet"
Jul 7 06:04:42.648872 kubelet[2347]: I0707 06:04:42.648849 2347 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 7 06:04:42.651253 kubelet[2347]: I0707 06:04:42.651117 2347 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 7 06:04:42.655375 kubelet[2347]: E0707 06:04:42.653677 2347 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.236.119.245:6443/api/v1/namespaces/default/events\": dial tcp 172.236.119.245:6443: connect: connection refused" event="&Event{ObjectMeta:{172-236-119-245.184fe2ebd811310b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-236-119-245,UID:172-236-119-245,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-236-119-245,},FirstTimestamp:2025-07-07 06:04:42.640609547 +0000 UTC m=+0.150375366,LastTimestamp:2025-07-07 06:04:42.640609547 +0000 UTC m=+0.150375366,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-236-119-245,}"
Jul 7 06:04:42.658831 kubelet[2347]: I0707 06:04:42.656565 2347 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 7 06:04:42.658831 kubelet[2347]: I0707 06:04:42.656636 2347 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 7 06:04:42.658831 kubelet[2347]: I0707 06:04:42.657311 2347 server.go:317] "Adding debug handlers to kubelet server"
Jul 7 06:04:42.659808 kubelet[2347]: I0707 06:04:42.659774 2347 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 7 06:04:42.660186 kubelet[2347]: I0707 06:04:42.660173 2347 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 7 06:04:42.660457 kubelet[2347]: I0707 06:04:42.660446 2347 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 7 06:04:42.660550 kubelet[2347]: I0707 06:04:42.660540 2347 reconciler.go:26] "Reconciler: start to sync state"
Jul 7 06:04:42.660871 kubelet[2347]: E0707 06:04:42.660853 2347 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.236.119.245:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.236.119.245:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 7 06:04:42.661852 kubelet[2347]: E0707 06:04:42.661837 2347 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 7 06:04:42.662217 kubelet[2347]: E0707 06:04:42.662202 2347 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-236-119-245\" not found"
Jul 7 06:04:42.662749 kubelet[2347]: I0707 06:04:42.662736 2347 factory.go:223] Registration of the systemd container factory successfully
Jul 7 06:04:42.662896 kubelet[2347]: I0707 06:04:42.662881 2347 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 7 06:04:42.663178 kubelet[2347]: E0707 06:04:42.663159 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.119.245:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-119-245?timeout=10s\": dial tcp 172.236.119.245:6443: connect: connection refused" interval="200ms"
Jul 7 06:04:42.664520 kubelet[2347]: I0707 06:04:42.664507 2347 factory.go:223] Registration of the containerd container factory successfully
Jul 7 06:04:42.675991 kubelet[2347]: I0707 06:04:42.675970 2347 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 7 06:04:42.675991 kubelet[2347]: I0707 06:04:42.675984 2347 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 7 06:04:42.676056 kubelet[2347]: I0707 06:04:42.675997 2347 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 06:04:42.677830 kubelet[2347]: I0707 06:04:42.677807 2347 policy_none.go:49] "None policy: Start"
Jul 7 06:04:42.677830 kubelet[2347]: I0707 06:04:42.677825 2347 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 7 06:04:42.677891 kubelet[2347]: I0707 06:04:42.677835 2347 state_mem.go:35] "Initializing new in-memory state store"
Jul 7 06:04:42.685076 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 7 06:04:42.685853 kubelet[2347]: I0707 06:04:42.685829 2347 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 7 06:04:42.688019 kubelet[2347]: I0707 06:04:42.688005 2347 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jul 7 06:04:42.688103 kubelet[2347]: I0707 06:04:42.688092 2347 status_manager.go:230] "Starting to sync pod status with apiserver"
Jul 7 06:04:42.688173 kubelet[2347]: I0707 06:04:42.688151 2347 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 7 06:04:42.688213 kubelet[2347]: I0707 06:04:42.688206 2347 kubelet.go:2436] "Starting kubelet main sync loop"
Jul 7 06:04:42.688288 kubelet[2347]: E0707 06:04:42.688274 2347 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 7 06:04:42.690490 kubelet[2347]: E0707 06:04:42.690465 2347 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.236.119.245:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.236.119.245:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jul 7 06:04:42.694499 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 7 06:04:42.708931 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 7 06:04:42.710545 kubelet[2347]: E0707 06:04:42.710517 2347 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jul 7 06:04:42.711365 kubelet[2347]: I0707 06:04:42.711340 2347 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 7 06:04:42.711745 kubelet[2347]: I0707 06:04:42.711468 2347 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 7 06:04:42.712270 kubelet[2347]: I0707 06:04:42.712250 2347 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 7 06:04:42.712621 kubelet[2347]: E0707 06:04:42.712602 2347 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 7 06:04:42.712659 kubelet[2347]: E0707 06:04:42.712631 2347 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-236-119-245\" not found"
Jul 7 06:04:42.799251 systemd[1]: Created slice kubepods-burstable-pod99d47f3c57e47f47080c6f78be2b5d6f.slice - libcontainer container kubepods-burstable-pod99d47f3c57e47f47080c6f78be2b5d6f.slice.
Jul 7 06:04:42.808848 kubelet[2347]: E0707 06:04:42.808652 2347 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-119-245\" not found" node="172-236-119-245"
Jul 7 06:04:42.811576 systemd[1]: Created slice kubepods-burstable-pod1222faf2ddb07af39f95df34680e6806.slice - libcontainer container kubepods-burstable-pod1222faf2ddb07af39f95df34680e6806.slice.
Jul 7 06:04:42.814410 kubelet[2347]: I0707 06:04:42.814396 2347 kubelet_node_status.go:75] "Attempting to register node" node="172-236-119-245"
Jul 7 06:04:42.814781 kubelet[2347]: E0707 06:04:42.814753 2347 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.236.119.245:6443/api/v1/nodes\": dial tcp 172.236.119.245:6443: connect: connection refused" node="172-236-119-245"
Jul 7 06:04:42.818568 kubelet[2347]: E0707 06:04:42.818521 2347 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-119-245\" not found" node="172-236-119-245"
Jul 7 06:04:42.821010 systemd[1]: Created slice kubepods-burstable-podeac919e5e9eac790d89a6d0f4149409d.slice - libcontainer container kubepods-burstable-podeac919e5e9eac790d89a6d0f4149409d.slice.
Jul 7 06:04:42.825717 kubelet[2347]: E0707 06:04:42.825577 2347 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-119-245\" not found" node="172-236-119-245"
Jul 7 06:04:42.863653 kubelet[2347]: E0707 06:04:42.863614 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.119.245:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-119-245?timeout=10s\": dial tcp 172.236.119.245:6443: connect: connection refused" interval="400ms"
Jul 7 06:04:42.961920 kubelet[2347]: I0707 06:04:42.961899 2347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/99d47f3c57e47f47080c6f78be2b5d6f-usr-share-ca-certificates\") pod \"kube-apiserver-172-236-119-245\" (UID: \"99d47f3c57e47f47080c6f78be2b5d6f\") " pod="kube-system/kube-apiserver-172-236-119-245"
Jul 7 06:04:42.961983 kubelet[2347]: I0707 06:04:42.961923 2347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1222faf2ddb07af39f95df34680e6806-ca-certs\") pod \"kube-controller-manager-172-236-119-245\" (UID: \"1222faf2ddb07af39f95df34680e6806\") " pod="kube-system/kube-controller-manager-172-236-119-245"
Jul 7 06:04:42.961983 kubelet[2347]: I0707 06:04:42.961938 2347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1222faf2ddb07af39f95df34680e6806-flexvolume-dir\") pod \"kube-controller-manager-172-236-119-245\" (UID: \"1222faf2ddb07af39f95df34680e6806\") " pod="kube-system/kube-controller-manager-172-236-119-245"
Jul 7 06:04:42.961983 kubelet[2347]: I0707 06:04:42.961951 2347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1222faf2ddb07af39f95df34680e6806-kubeconfig\") pod \"kube-controller-manager-172-236-119-245\" (UID: \"1222faf2ddb07af39f95df34680e6806\") " pod="kube-system/kube-controller-manager-172-236-119-245"
Jul 7 06:04:42.961983 kubelet[2347]: I0707 06:04:42.961964 2347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1222faf2ddb07af39f95df34680e6806-usr-share-ca-certificates\") pod \"kube-controller-manager-172-236-119-245\" (UID: \"1222faf2ddb07af39f95df34680e6806\") " pod="kube-system/kube-controller-manager-172-236-119-245"
Jul 7 06:04:42.961983 kubelet[2347]: I0707 06:04:42.961976 2347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eac919e5e9eac790d89a6d0f4149409d-kubeconfig\") pod \"kube-scheduler-172-236-119-245\" (UID: \"eac919e5e9eac790d89a6d0f4149409d\") " pod="kube-system/kube-scheduler-172-236-119-245"
Jul 7 06:04:42.962082 kubelet[2347]: I0707 06:04:42.961988 2347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/99d47f3c57e47f47080c6f78be2b5d6f-ca-certs\") pod \"kube-apiserver-172-236-119-245\" (UID: \"99d47f3c57e47f47080c6f78be2b5d6f\") " pod="kube-system/kube-apiserver-172-236-119-245"
Jul 7 06:04:42.962082 kubelet[2347]: I0707 06:04:42.961998 2347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/99d47f3c57e47f47080c6f78be2b5d6f-k8s-certs\") pod \"kube-apiserver-172-236-119-245\" (UID: \"99d47f3c57e47f47080c6f78be2b5d6f\") " pod="kube-system/kube-apiserver-172-236-119-245"
Jul 7 06:04:42.962082 kubelet[2347]: I0707 06:04:42.962009 2347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1222faf2ddb07af39f95df34680e6806-k8s-certs\") pod \"kube-controller-manager-172-236-119-245\" (UID: \"1222faf2ddb07af39f95df34680e6806\") " pod="kube-system/kube-controller-manager-172-236-119-245"
Jul 7 06:04:43.016586 kubelet[2347]: I0707 06:04:43.016568 2347 kubelet_node_status.go:75] "Attempting to register node" node="172-236-119-245"
Jul 7 06:04:43.016856 kubelet[2347]: E0707 06:04:43.016832 2347 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.236.119.245:6443/api/v1/nodes\": dial tcp 172.236.119.245:6443: connect: connection refused" node="172-236-119-245"
Jul 7 06:04:43.109812 kubelet[2347]: E0707 06:04:43.109736 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Jul 7 06:04:43.110158 containerd[1544]: time="2025-07-07T06:04:43.110127822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-236-119-245,Uid:99d47f3c57e47f47080c6f78be2b5d6f,Namespace:kube-system,Attempt:0,}"
Jul 7 06:04:43.119171 kubelet[2347]: E0707 06:04:43.119119 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Jul 7 06:04:43.119673 containerd[1544]: time="2025-07-07T06:04:43.119545917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-236-119-245,Uid:1222faf2ddb07af39f95df34680e6806,Namespace:kube-system,Attempt:0,}"
Jul 7 06:04:43.127013 kubelet[2347]: E0707 06:04:43.126874 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Jul 7 06:04:43.137096 containerd[1544]: time="2025-07-07T06:04:43.137066288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-236-119-245,Uid:eac919e5e9eac790d89a6d0f4149409d,Namespace:kube-system,Attempt:0,}"
Jul 7 06:04:43.138804 containerd[1544]: time="2025-07-07T06:04:43.138768757Z" level=info msg="connecting to shim bba8ec9a061366cba972567251796cd85feaeaf4f67fda0199b3008a4a960daa" address="unix:///run/containerd/s/e56ec3b8cc6726db10e61db8427853e71c94827b551c365407872c7f1673be8e" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:04:43.148003 containerd[1544]: time="2025-07-07T06:04:43.147791883Z" level=info msg="connecting to shim dbd05077589360bc5d4195ad549a44b399ef86ebdd767f93d4c1078774f65aa4" address="unix:///run/containerd/s/ff39c89c50df65c7b034b7e502052f2e83bbb7cbd083696e1f3324530bcd10c9" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:04:43.168855 containerd[1544]: time="2025-07-07T06:04:43.168820532Z" level=info msg="connecting to shim c6212fadbcf8ebb438d88b49697e0e344b6505b6e62d653d112a11f2757fab7a" address="unix:///run/containerd/s/70d1b0d0be34c85d0eb34fde90b1045770f2676748a25fbf541b0e609351434b" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:04:43.186832 systemd[1]: Started cri-containerd-bba8ec9a061366cba972567251796cd85feaeaf4f67fda0199b3008a4a960daa.scope - libcontainer container bba8ec9a061366cba972567251796cd85feaeaf4f67fda0199b3008a4a960daa.
Jul 7 06:04:43.188053 systemd[1]: Started cri-containerd-dbd05077589360bc5d4195ad549a44b399ef86ebdd767f93d4c1078774f65aa4.scope - libcontainer container dbd05077589360bc5d4195ad549a44b399ef86ebdd767f93d4c1078774f65aa4.
Jul 7 06:04:43.204280 systemd[1]: Started cri-containerd-c6212fadbcf8ebb438d88b49697e0e344b6505b6e62d653d112a11f2757fab7a.scope - libcontainer container c6212fadbcf8ebb438d88b49697e0e344b6505b6e62d653d112a11f2757fab7a.
Jul 7 06:04:43.266399 kubelet[2347]: E0707 06:04:43.266239 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.119.245:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-119-245?timeout=10s\": dial tcp 172.236.119.245:6443: connect: connection refused" interval="800ms"
Jul 7 06:04:43.269927 containerd[1544]: time="2025-07-07T06:04:43.269847922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-236-119-245,Uid:1222faf2ddb07af39f95df34680e6806,Namespace:kube-system,Attempt:0,} returns sandbox id \"dbd05077589360bc5d4195ad549a44b399ef86ebdd767f93d4c1078774f65aa4\""
Jul 7 06:04:43.271876 kubelet[2347]: E0707 06:04:43.271860 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Jul 7 06:04:43.278721 containerd[1544]: time="2025-07-07T06:04:43.278234528Z" level=info msg="CreateContainer within sandbox \"dbd05077589360bc5d4195ad549a44b399ef86ebdd767f93d4c1078774f65aa4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 7 06:04:43.289903 containerd[1544]: time="2025-07-07T06:04:43.289883452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-236-119-245,Uid:eac919e5e9eac790d89a6d0f4149409d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6212fadbcf8ebb438d88b49697e0e344b6505b6e62d653d112a11f2757fab7a\""
Jul 7 06:04:43.290897 kubelet[2347]: E0707 06:04:43.290876 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Jul 7 06:04:43.292088 containerd[1544]: time="2025-07-07T06:04:43.292067571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-236-119-245,Uid:99d47f3c57e47f47080c6f78be2b5d6f,Namespace:kube-system,Attempt:0,} returns sandbox id \"bba8ec9a061366cba972567251796cd85feaeaf4f67fda0199b3008a4a960daa\""
Jul 7 06:04:43.293803 kubelet[2347]: E0707 06:04:43.293781 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Jul 7 06:04:43.293958 containerd[1544]: time="2025-07-07T06:04:43.293940360Z" level=info msg="Container c3f298d98989329887ff8674038a741324c3285983b00dfc705511ce8c86e05d: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:04:43.294267 containerd[1544]: time="2025-07-07T06:04:43.294025150Z" level=info msg="CreateContainer within sandbox \"c6212fadbcf8ebb438d88b49697e0e344b6505b6e62d653d112a11f2757fab7a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 7 06:04:43.297769 containerd[1544]: time="2025-07-07T06:04:43.297749158Z" level=info msg="CreateContainer within sandbox \"bba8ec9a061366cba972567251796cd85feaeaf4f67fda0199b3008a4a960daa\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 7 06:04:43.300342 containerd[1544]: time="2025-07-07T06:04:43.300322097Z" level=info msg="CreateContainer within sandbox \"dbd05077589360bc5d4195ad549a44b399ef86ebdd767f93d4c1078774f65aa4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c3f298d98989329887ff8674038a741324c3285983b00dfc705511ce8c86e05d\""
Jul 7 06:04:43.300940 containerd[1544]: time="2025-07-07T06:04:43.300923066Z" level=info msg="StartContainer for \"c3f298d98989329887ff8674038a741324c3285983b00dfc705511ce8c86e05d\""
Jul 7 06:04:43.301943 containerd[1544]: time="2025-07-07T06:04:43.301922886Z" level=info msg="connecting to shim c3f298d98989329887ff8674038a741324c3285983b00dfc705511ce8c86e05d" address="unix:///run/containerd/s/ff39c89c50df65c7b034b7e502052f2e83bbb7cbd083696e1f3324530bcd10c9" protocol=ttrpc version=3
Jul 7 06:04:43.306322 containerd[1544]: time="2025-07-07T06:04:43.305859234Z" level=info msg="Container c1e16fb175365ffe9e9880eb442c0156205c2816896733f2890aae9d5412d864: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:04:43.307128 containerd[1544]: time="2025-07-07T06:04:43.307102863Z" level=info msg="Container dab7340e4f926665a325530ae8a8c27370a03a831c5168a2161484bbff37f750: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:04:43.311717 containerd[1544]: time="2025-07-07T06:04:43.311676331Z" level=info msg="CreateContainer within sandbox \"c6212fadbcf8ebb438d88b49697e0e344b6505b6e62d653d112a11f2757fab7a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c1e16fb175365ffe9e9880eb442c0156205c2816896733f2890aae9d5412d864\""
Jul 7 06:04:43.312320 containerd[1544]: time="2025-07-07T06:04:43.312073561Z" level=info msg="StartContainer for \"c1e16fb175365ffe9e9880eb442c0156205c2816896733f2890aae9d5412d864\""
Jul 7 06:04:43.314121 containerd[1544]: time="2025-07-07T06:04:43.314099960Z" level=info msg="CreateContainer within sandbox \"bba8ec9a061366cba972567251796cd85feaeaf4f67fda0199b3008a4a960daa\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dab7340e4f926665a325530ae8a8c27370a03a831c5168a2161484bbff37f750\""
Jul 7 06:04:43.314906 containerd[1544]: time="2025-07-07T06:04:43.314263510Z" level=info msg="connecting to shim c1e16fb175365ffe9e9880eb442c0156205c2816896733f2890aae9d5412d864" address="unix:///run/containerd/s/70d1b0d0be34c85d0eb34fde90b1045770f2676748a25fbf541b0e609351434b" protocol=ttrpc version=3
Jul 7 06:04:43.316496 containerd[1544]: time="2025-07-07T06:04:43.316470889Z" level=info msg="StartContainer for \"dab7340e4f926665a325530ae8a8c27370a03a831c5168a2161484bbff37f750\""
Jul 7 06:04:43.321613 containerd[1544]: time="2025-07-07T06:04:43.321551076Z" level=info msg="connecting to shim dab7340e4f926665a325530ae8a8c27370a03a831c5168a2161484bbff37f750" address="unix:///run/containerd/s/e56ec3b8cc6726db10e61db8427853e71c94827b551c365407872c7f1673be8e" protocol=ttrpc version=3
Jul 7 06:04:43.325850 systemd[1]: Started cri-containerd-c3f298d98989329887ff8674038a741324c3285983b00dfc705511ce8c86e05d.scope - libcontainer container c3f298d98989329887ff8674038a741324c3285983b00dfc705511ce8c86e05d.
Jul 7 06:04:43.348815 systemd[1]: Started cri-containerd-dab7340e4f926665a325530ae8a8c27370a03a831c5168a2161484bbff37f750.scope - libcontainer container dab7340e4f926665a325530ae8a8c27370a03a831c5168a2161484bbff37f750.
Jul 7 06:04:43.352572 systemd[1]: Started cri-containerd-c1e16fb175365ffe9e9880eb442c0156205c2816896733f2890aae9d5412d864.scope - libcontainer container c1e16fb175365ffe9e9880eb442c0156205c2816896733f2890aae9d5412d864.
Jul 7 06:04:43.426125 kubelet[2347]: I0707 06:04:43.426059 2347 kubelet_node_status.go:75] "Attempting to register node" node="172-236-119-245" Jul 7 06:04:43.426826 kubelet[2347]: E0707 06:04:43.426608 2347 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.236.119.245:6443/api/v1/nodes\": dial tcp 172.236.119.245:6443: connect: connection refused" node="172-236-119-245" Jul 7 06:04:43.442171 containerd[1544]: time="2025-07-07T06:04:43.442133996Z" level=info msg="StartContainer for \"c3f298d98989329887ff8674038a741324c3285983b00dfc705511ce8c86e05d\" returns successfully" Jul 7 06:04:43.449309 containerd[1544]: time="2025-07-07T06:04:43.449281062Z" level=info msg="StartContainer for \"dab7340e4f926665a325530ae8a8c27370a03a831c5168a2161484bbff37f750\" returns successfully" Jul 7 06:04:43.477022 containerd[1544]: time="2025-07-07T06:04:43.476986048Z" level=info msg="StartContainer for \"c1e16fb175365ffe9e9880eb442c0156205c2816896733f2890aae9d5412d864\" returns successfully" Jul 7 06:04:43.492290 kubelet[2347]: E0707 06:04:43.492251 2347 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.236.119.245:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.236.119.245:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 7 06:04:43.700428 kubelet[2347]: E0707 06:04:43.700171 2347 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-119-245\" not found" node="172-236-119-245" Jul 7 06:04:43.700428 kubelet[2347]: E0707 06:04:43.700284 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:04:43.706233 kubelet[2347]: E0707 06:04:43.706069 2347 kubelet.go:3305] 
"No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-119-245\" not found" node="172-236-119-245" Jul 7 06:04:43.706233 kubelet[2347]: E0707 06:04:43.706172 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:04:43.709339 kubelet[2347]: E0707 06:04:43.709273 2347 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-119-245\" not found" node="172-236-119-245" Jul 7 06:04:43.709586 kubelet[2347]: E0707 06:04:43.709532 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:04:44.229314 kubelet[2347]: I0707 06:04:44.229154 2347 kubelet_node_status.go:75] "Attempting to register node" node="172-236-119-245" Jul 7 06:04:44.710565 kubelet[2347]: E0707 06:04:44.710541 2347 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-119-245\" not found" node="172-236-119-245" Jul 7 06:04:44.712722 kubelet[2347]: E0707 06:04:44.711390 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:04:44.712722 kubelet[2347]: E0707 06:04:44.710996 2347 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-119-245\" not found" node="172-236-119-245" Jul 7 06:04:44.712722 kubelet[2347]: E0707 06:04:44.711689 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:04:44.760116 
kubelet[2347]: E0707 06:04:44.760067 2347 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-119-245\" not found" node="172-236-119-245" Jul 7 06:04:44.761068 kubelet[2347]: E0707 06:04:44.761054 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:04:45.200549 kubelet[2347]: E0707 06:04:45.200515 2347 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-236-119-245\" not found" node="172-236-119-245" Jul 7 06:04:45.358744 kubelet[2347]: I0707 06:04:45.358678 2347 kubelet_node_status.go:78] "Successfully registered node" node="172-236-119-245" Jul 7 06:04:45.358861 kubelet[2347]: E0707 06:04:45.358754 2347 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172-236-119-245\": node \"172-236-119-245\" not found" Jul 7 06:04:45.363749 kubelet[2347]: I0707 06:04:45.363718 2347 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-236-119-245" Jul 7 06:04:45.376651 kubelet[2347]: E0707 06:04:45.376611 2347 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-236-119-245\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-236-119-245" Jul 7 06:04:45.376718 kubelet[2347]: I0707 06:04:45.376658 2347 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-236-119-245" Jul 7 06:04:45.379326 kubelet[2347]: E0707 06:04:45.379296 2347 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-236-119-245\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-236-119-245" Jul 7 06:04:45.379326 kubelet[2347]: I0707 
06:04:45.379317 2347 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-236-119-245" Jul 7 06:04:45.380419 kubelet[2347]: E0707 06:04:45.380383 2347 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-236-119-245\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-236-119-245" Jul 7 06:04:45.633228 kubelet[2347]: I0707 06:04:45.633197 2347 apiserver.go:52] "Watching apiserver" Jul 7 06:04:45.660610 kubelet[2347]: I0707 06:04:45.660567 2347 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 06:04:46.235601 kubelet[2347]: I0707 06:04:46.235544 2347 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-236-119-245" Jul 7 06:04:46.240150 kubelet[2347]: E0707 06:04:46.239993 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:04:46.710624 kubelet[2347]: E0707 06:04:46.710568 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:04:47.434386 systemd[1]: Reload requested from client PID 2632 ('systemctl') (unit session-7.scope)... Jul 7 06:04:47.434403 systemd[1]: Reloading... Jul 7 06:04:47.539737 zram_generator::config[2688]: No configuration found. Jul 7 06:04:47.613673 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:04:47.728233 systemd[1]: Reloading finished in 293 ms. Jul 7 06:04:47.770028 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 7 06:04:47.797106 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 06:04:47.797408 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:04:47.797458 systemd[1]: kubelet.service: Consumed 534ms CPU time, 131.2M memory peak. Jul 7 06:04:47.799160 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:04:47.987368 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:04:47.997004 (kubelet)[2727]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 06:04:48.038764 kubelet[2727]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:04:48.038764 kubelet[2727]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 06:04:48.038764 kubelet[2727]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 7 06:04:48.039093 kubelet[2727]: I0707 06:04:48.038799 2727 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 06:04:48.044021 kubelet[2727]: I0707 06:04:48.043991 2727 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 7 06:04:48.044021 kubelet[2727]: I0707 06:04:48.044009 2727 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 06:04:48.044201 kubelet[2727]: I0707 06:04:48.044174 2727 server.go:956] "Client rotation is on, will bootstrap in background" Jul 7 06:04:48.045300 kubelet[2727]: I0707 06:04:48.045277 2727 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 7 06:04:48.052268 kubelet[2727]: I0707 06:04:48.051908 2727 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:04:48.056511 kubelet[2727]: I0707 06:04:48.056481 2727 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 06:04:48.060737 kubelet[2727]: I0707 06:04:48.059390 2727 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 06:04:48.060737 kubelet[2727]: I0707 06:04:48.059622 2727 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 06:04:48.060737 kubelet[2727]: I0707 06:04:48.059640 2727 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-236-119-245","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 06:04:48.060737 kubelet[2727]: I0707 06:04:48.059793 2727 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 06:04:48.060923 
kubelet[2727]: I0707 06:04:48.059802 2727 container_manager_linux.go:303] "Creating device plugin manager" Jul 7 06:04:48.060923 kubelet[2727]: I0707 06:04:48.059848 2727 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:04:48.060923 kubelet[2727]: I0707 06:04:48.059990 2727 kubelet.go:480] "Attempting to sync node with API server" Jul 7 06:04:48.060923 kubelet[2727]: I0707 06:04:48.060000 2727 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:04:48.060923 kubelet[2727]: I0707 06:04:48.060021 2727 kubelet.go:386] "Adding apiserver pod source" Jul 7 06:04:48.060923 kubelet[2727]: I0707 06:04:48.060034 2727 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:04:48.062002 kubelet[2727]: I0707 06:04:48.061988 2727 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 06:04:48.062431 kubelet[2727]: I0707 06:04:48.062417 2727 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 7 06:04:48.065324 kubelet[2727]: I0707 06:04:48.065292 2727 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 06:04:48.065429 kubelet[2727]: I0707 06:04:48.065419 2727 server.go:1289] "Started kubelet" Jul 7 06:04:48.067827 kubelet[2727]: I0707 06:04:48.067797 2727 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:04:48.068657 kubelet[2727]: I0707 06:04:48.068630 2727 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:04:48.068994 kubelet[2727]: I0707 06:04:48.068978 2727 server.go:317] "Adding debug handlers to kubelet server" Jul 7 06:04:48.071113 kubelet[2727]: I0707 06:04:48.070150 2727 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 06:04:48.074746 kubelet[2727]: I0707 
06:04:48.074657 2727 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 06:04:48.075319 kubelet[2727]: E0707 06:04:48.075291 2727 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-236-119-245\" not found" Jul 7 06:04:48.076526 kubelet[2727]: I0707 06:04:48.076503 2727 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 06:04:48.076624 kubelet[2727]: I0707 06:04:48.076605 2727 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:04:48.079141 kubelet[2727]: I0707 06:04:48.079026 2727 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:04:48.079275 kubelet[2727]: I0707 06:04:48.079236 2727 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:04:48.080171 kubelet[2727]: I0707 06:04:48.080138 2727 factory.go:223] Registration of the systemd container factory successfully Jul 7 06:04:48.080253 kubelet[2727]: I0707 06:04:48.080226 2727 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:04:48.082750 kubelet[2727]: I0707 06:04:48.082730 2727 factory.go:223] Registration of the containerd container factory successfully Jul 7 06:04:48.110331 kubelet[2727]: I0707 06:04:48.110307 2727 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 7 06:04:48.113068 kubelet[2727]: I0707 06:04:48.113048 2727 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 7 06:04:48.113068 kubelet[2727]: I0707 06:04:48.113068 2727 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 7 06:04:48.113158 kubelet[2727]: I0707 06:04:48.113083 2727 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 7 06:04:48.113158 kubelet[2727]: I0707 06:04:48.113090 2727 kubelet.go:2436] "Starting kubelet main sync loop" Jul 7 06:04:48.113158 kubelet[2727]: E0707 06:04:48.113125 2727 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:04:48.137502 kubelet[2727]: I0707 06:04:48.136570 2727 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 06:04:48.137502 kubelet[2727]: I0707 06:04:48.136583 2727 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 06:04:48.137502 kubelet[2727]: I0707 06:04:48.136598 2727 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:04:48.137502 kubelet[2727]: I0707 06:04:48.136693 2727 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 06:04:48.137502 kubelet[2727]: I0707 06:04:48.136720 2727 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 06:04:48.137502 kubelet[2727]: I0707 06:04:48.136734 2727 policy_none.go:49] "None policy: Start" Jul 7 06:04:48.137502 kubelet[2727]: I0707 06:04:48.136743 2727 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 06:04:48.137502 kubelet[2727]: I0707 06:04:48.136754 2727 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:04:48.137502 kubelet[2727]: I0707 06:04:48.136831 2727 state_mem.go:75] "Updated machine memory state" Jul 7 06:04:48.141174 kubelet[2727]: E0707 06:04:48.141147 2727 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 7 06:04:48.141311 kubelet[2727]: I0707 06:04:48.141283 2727 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 06:04:48.141341 kubelet[2727]: I0707 06:04:48.141301 2727 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:04:48.141630 kubelet[2727]: I0707 06:04:48.141606 2727 plugin_manager.go:118] "Starting 
Kubelet Plugin Manager" Jul 7 06:04:48.142964 kubelet[2727]: E0707 06:04:48.142949 2727 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 7 06:04:48.214684 kubelet[2727]: I0707 06:04:48.214668 2727 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-236-119-245" Jul 7 06:04:48.214861 kubelet[2727]: I0707 06:04:48.214834 2727 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-236-119-245" Jul 7 06:04:48.215277 kubelet[2727]: I0707 06:04:48.215049 2727 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-236-119-245" Jul 7 06:04:48.222091 kubelet[2727]: E0707 06:04:48.222061 2727 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-236-119-245\" already exists" pod="kube-system/kube-scheduler-172-236-119-245" Jul 7 06:04:48.245888 kubelet[2727]: I0707 06:04:48.245801 2727 kubelet_node_status.go:75] "Attempting to register node" node="172-236-119-245" Jul 7 06:04:48.254012 kubelet[2727]: I0707 06:04:48.253794 2727 kubelet_node_status.go:124] "Node was previously registered" node="172-236-119-245" Jul 7 06:04:48.254012 kubelet[2727]: I0707 06:04:48.253839 2727 kubelet_node_status.go:78] "Successfully registered node" node="172-236-119-245" Jul 7 06:04:48.378532 kubelet[2727]: I0707 06:04:48.378429 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/99d47f3c57e47f47080c6f78be2b5d6f-ca-certs\") pod \"kube-apiserver-172-236-119-245\" (UID: \"99d47f3c57e47f47080c6f78be2b5d6f\") " pod="kube-system/kube-apiserver-172-236-119-245" Jul 7 06:04:48.378532 kubelet[2727]: I0707 06:04:48.378513 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/99d47f3c57e47f47080c6f78be2b5d6f-k8s-certs\") pod \"kube-apiserver-172-236-119-245\" (UID: \"99d47f3c57e47f47080c6f78be2b5d6f\") " pod="kube-system/kube-apiserver-172-236-119-245" Jul 7 06:04:48.378532 kubelet[2727]: I0707 06:04:48.378533 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/99d47f3c57e47f47080c6f78be2b5d6f-usr-share-ca-certificates\") pod \"kube-apiserver-172-236-119-245\" (UID: \"99d47f3c57e47f47080c6f78be2b5d6f\") " pod="kube-system/kube-apiserver-172-236-119-245" Jul 7 06:04:48.378661 kubelet[2727]: I0707 06:04:48.378551 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1222faf2ddb07af39f95df34680e6806-ca-certs\") pod \"kube-controller-manager-172-236-119-245\" (UID: \"1222faf2ddb07af39f95df34680e6806\") " pod="kube-system/kube-controller-manager-172-236-119-245" Jul 7 06:04:48.378661 kubelet[2727]: I0707 06:04:48.378568 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1222faf2ddb07af39f95df34680e6806-flexvolume-dir\") pod \"kube-controller-manager-172-236-119-245\" (UID: \"1222faf2ddb07af39f95df34680e6806\") " pod="kube-system/kube-controller-manager-172-236-119-245" Jul 7 06:04:48.378661 kubelet[2727]: I0707 06:04:48.378584 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1222faf2ddb07af39f95df34680e6806-k8s-certs\") pod \"kube-controller-manager-172-236-119-245\" (UID: \"1222faf2ddb07af39f95df34680e6806\") " pod="kube-system/kube-controller-manager-172-236-119-245" Jul 7 06:04:48.378661 kubelet[2727]: I0707 06:04:48.378598 2727 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1222faf2ddb07af39f95df34680e6806-kubeconfig\") pod \"kube-controller-manager-172-236-119-245\" (UID: \"1222faf2ddb07af39f95df34680e6806\") " pod="kube-system/kube-controller-manager-172-236-119-245" Jul 7 06:04:48.378661 kubelet[2727]: I0707 06:04:48.378613 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1222faf2ddb07af39f95df34680e6806-usr-share-ca-certificates\") pod \"kube-controller-manager-172-236-119-245\" (UID: \"1222faf2ddb07af39f95df34680e6806\") " pod="kube-system/kube-controller-manager-172-236-119-245" Jul 7 06:04:48.378814 kubelet[2727]: I0707 06:04:48.378630 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eac919e5e9eac790d89a6d0f4149409d-kubeconfig\") pod \"kube-scheduler-172-236-119-245\" (UID: \"eac919e5e9eac790d89a6d0f4149409d\") " pod="kube-system/kube-scheduler-172-236-119-245" Jul 7 06:04:48.521369 kubelet[2727]: E0707 06:04:48.521257 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:04:48.521813 kubelet[2727]: E0707 06:04:48.521773 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:04:48.522441 kubelet[2727]: E0707 06:04:48.522421 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:04:49.062249 kubelet[2727]: I0707 06:04:49.061199 2727 apiserver.go:52] "Watching 
apiserver" Jul 7 06:04:49.077335 kubelet[2727]: I0707 06:04:49.077269 2727 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 06:04:49.130797 kubelet[2727]: I0707 06:04:49.130770 2727 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-236-119-245" Jul 7 06:04:49.131133 kubelet[2727]: E0707 06:04:49.131116 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:04:49.131357 kubelet[2727]: E0707 06:04:49.131339 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:04:49.138069 kubelet[2727]: E0707 06:04:49.138041 2727 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-236-119-245\" already exists" pod="kube-system/kube-apiserver-172-236-119-245" Jul 7 06:04:49.138303 kubelet[2727]: E0707 06:04:49.138267 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:04:49.153050 kubelet[2727]: I0707 06:04:49.153008 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-236-119-245" podStartSLOduration=1.15298888 podStartE2EDuration="1.15298888s" podCreationTimestamp="2025-07-07 06:04:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:04:49.15285078 +0000 UTC m=+1.149975936" watchObservedRunningTime="2025-07-07 06:04:49.15298888 +0000 UTC m=+1.150114036" Jul 7 06:04:49.163939 kubelet[2727]: I0707 06:04:49.163902 2727 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/kube-controller-manager-172-236-119-245" podStartSLOduration=1.163895984 podStartE2EDuration="1.163895984s" podCreationTimestamp="2025-07-07 06:04:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:04:49.159147007 +0000 UTC m=+1.156272173" watchObservedRunningTime="2025-07-07 06:04:49.163895984 +0000 UTC m=+1.161021140" Jul 7 06:04:49.169827 kubelet[2727]: I0707 06:04:49.169793 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-236-119-245" podStartSLOduration=3.169785111 podStartE2EDuration="3.169785111s" podCreationTimestamp="2025-07-07 06:04:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:04:49.164193844 +0000 UTC m=+1.161319000" watchObservedRunningTime="2025-07-07 06:04:49.169785111 +0000 UTC m=+1.166910267" Jul 7 06:04:49.962632 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 7 06:04:50.132641 kubelet[2727]: E0707 06:04:50.132142 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:04:50.132641 kubelet[2727]: E0707 06:04:50.132221 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:04:54.115742 kubelet[2727]: I0707 06:04:54.115328 2727 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 06:04:54.116552 containerd[1544]: time="2025-07-07T06:04:54.115993731Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 7 06:04:54.116831 kubelet[2727]: I0707 06:04:54.116671 2727 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 7 06:04:55.146038 systemd[1]: Created slice kubepods-besteffort-pod37560c29_dfbe_4946_907e_ab89b25420d7.slice - libcontainer container kubepods-besteffort-pod37560c29_dfbe_4946_907e_ab89b25420d7.slice.
Jul 7 06:04:55.219126 kubelet[2727]: I0707 06:04:55.218879 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/37560c29-dfbe-4946-907e-ab89b25420d7-kube-proxy\") pod \"kube-proxy-rtj2l\" (UID: \"37560c29-dfbe-4946-907e-ab89b25420d7\") " pod="kube-system/kube-proxy-rtj2l"
Jul 7 06:04:55.219126 kubelet[2727]: I0707 06:04:55.218951 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/37560c29-dfbe-4946-907e-ab89b25420d7-xtables-lock\") pod \"kube-proxy-rtj2l\" (UID: \"37560c29-dfbe-4946-907e-ab89b25420d7\") " pod="kube-system/kube-proxy-rtj2l"
Jul 7 06:04:55.219126 kubelet[2727]: I0707 06:04:55.218987 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/37560c29-dfbe-4946-907e-ab89b25420d7-lib-modules\") pod \"kube-proxy-rtj2l\" (UID: \"37560c29-dfbe-4946-907e-ab89b25420d7\") " pod="kube-system/kube-proxy-rtj2l"
Jul 7 06:04:55.219126 kubelet[2727]: I0707 06:04:55.219006 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxt4d\" (UniqueName: \"kubernetes.io/projected/37560c29-dfbe-4946-907e-ab89b25420d7-kube-api-access-bxt4d\") pod \"kube-proxy-rtj2l\" (UID: \"37560c29-dfbe-4946-907e-ab89b25420d7\") " pod="kube-system/kube-proxy-rtj2l"
Jul 7 06:04:55.259327 systemd[1]: Created slice kubepods-besteffort-pod368f605b_f1aa_441b_ac42_f9b990af83e1.slice - libcontainer container kubepods-besteffort-pod368f605b_f1aa_441b_ac42_f9b990af83e1.slice.
Jul 7 06:04:55.319879 kubelet[2727]: I0707 06:04:55.319812 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mbcw\" (UniqueName: \"kubernetes.io/projected/368f605b-f1aa-441b-ac42-f9b990af83e1-kube-api-access-2mbcw\") pod \"tigera-operator-747864d56d-6f5d5\" (UID: \"368f605b-f1aa-441b-ac42-f9b990af83e1\") " pod="tigera-operator/tigera-operator-747864d56d-6f5d5"
Jul 7 06:04:55.320880 kubelet[2727]: I0707 06:04:55.319926 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/368f605b-f1aa-441b-ac42-f9b990af83e1-var-lib-calico\") pod \"tigera-operator-747864d56d-6f5d5\" (UID: \"368f605b-f1aa-441b-ac42-f9b990af83e1\") " pod="tigera-operator/tigera-operator-747864d56d-6f5d5"
Jul 7 06:04:55.453044 kubelet[2727]: E0707 06:04:55.452928 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Jul 7 06:04:55.454384 containerd[1544]: time="2025-07-07T06:04:55.454331315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rtj2l,Uid:37560c29-dfbe-4946-907e-ab89b25420d7,Namespace:kube-system,Attempt:0,}"
Jul 7 06:04:55.473765 containerd[1544]: time="2025-07-07T06:04:55.473191924Z" level=info msg="connecting to shim f59c9835b0de90ca225bfe1bb8e70f8f54a33668786d7dc17ee8f85d3d4ad915" address="unix:///run/containerd/s/256048d0b788b567faf73be0918e13af892a82d95f1150a50204fcb428915022" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:04:55.501822 systemd[1]: Started cri-containerd-f59c9835b0de90ca225bfe1bb8e70f8f54a33668786d7dc17ee8f85d3d4ad915.scope - libcontainer container f59c9835b0de90ca225bfe1bb8e70f8f54a33668786d7dc17ee8f85d3d4ad915.
Jul 7 06:04:55.532048 containerd[1544]: time="2025-07-07T06:04:55.532019290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rtj2l,Uid:37560c29-dfbe-4946-907e-ab89b25420d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"f59c9835b0de90ca225bfe1bb8e70f8f54a33668786d7dc17ee8f85d3d4ad915\""
Jul 7 06:04:55.533016 kubelet[2727]: E0707 06:04:55.532992 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Jul 7 06:04:55.541643 containerd[1544]: time="2025-07-07T06:04:55.541606792Z" level=info msg="CreateContainer within sandbox \"f59c9835b0de90ca225bfe1bb8e70f8f54a33668786d7dc17ee8f85d3d4ad915\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 7 06:04:55.551904 containerd[1544]: time="2025-07-07T06:04:55.551875342Z" level=info msg="Container b464ed287d7bbd9181191a9f46ae30b20708c201a23df69b58e92e21440b4592: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:04:55.556582 containerd[1544]: time="2025-07-07T06:04:55.556536335Z" level=info msg="CreateContainer within sandbox \"f59c9835b0de90ca225bfe1bb8e70f8f54a33668786d7dc17ee8f85d3d4ad915\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b464ed287d7bbd9181191a9f46ae30b20708c201a23df69b58e92e21440b4592\""
Jul 7 06:04:55.559035 containerd[1544]: time="2025-07-07T06:04:55.557923442Z" level=info msg="StartContainer for \"b464ed287d7bbd9181191a9f46ae30b20708c201a23df69b58e92e21440b4592\""
Jul 7 06:04:55.560075 containerd[1544]: time="2025-07-07T06:04:55.560043067Z" level=info msg="connecting to shim b464ed287d7bbd9181191a9f46ae30b20708c201a23df69b58e92e21440b4592" address="unix:///run/containerd/s/256048d0b788b567faf73be0918e13af892a82d95f1150a50204fcb428915022" protocol=ttrpc version=3
Jul 7 06:04:55.565153 containerd[1544]: time="2025-07-07T06:04:55.565116973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-6f5d5,Uid:368f605b-f1aa-441b-ac42-f9b990af83e1,Namespace:tigera-operator,Attempt:0,}"
Jul 7 06:04:55.582719 containerd[1544]: time="2025-07-07T06:04:55.582643323Z" level=info msg="connecting to shim 8bb9be50fa71999f9c84a0a0be624f33ccbb2c758e53f702d7f9bbf49f79fda7" address="unix:///run/containerd/s/825810340b5fa0bf6ef58f483533d61b3f28bc5567169fcedf20b04c96ea12ea" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:04:55.583911 systemd[1]: Started cri-containerd-b464ed287d7bbd9181191a9f46ae30b20708c201a23df69b58e92e21440b4592.scope - libcontainer container b464ed287d7bbd9181191a9f46ae30b20708c201a23df69b58e92e21440b4592.
Jul 7 06:04:55.608910 systemd[1]: Started cri-containerd-8bb9be50fa71999f9c84a0a0be624f33ccbb2c758e53f702d7f9bbf49f79fda7.scope - libcontainer container 8bb9be50fa71999f9c84a0a0be624f33ccbb2c758e53f702d7f9bbf49f79fda7.
Jul 7 06:04:55.649468 containerd[1544]: time="2025-07-07T06:04:55.649409740Z" level=info msg="StartContainer for \"b464ed287d7bbd9181191a9f46ae30b20708c201a23df69b58e92e21440b4592\" returns successfully"
Jul 7 06:04:55.673108 kubelet[2727]: E0707 06:04:55.673070 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Jul 7 06:04:55.685334 containerd[1544]: time="2025-07-07T06:04:55.685109049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-6f5d5,Uid:368f605b-f1aa-441b-ac42-f9b990af83e1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8bb9be50fa71999f9c84a0a0be624f33ccbb2c758e53f702d7f9bbf49f79fda7\""
Jul 7 06:04:55.689723 containerd[1544]: time="2025-07-07T06:04:55.688897737Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 7 06:04:55.814038 kubelet[2727]: E0707 06:04:55.813610 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Jul 7 06:04:56.145578 kubelet[2727]: E0707 06:04:56.145535 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Jul 7 06:04:56.149720 kubelet[2727]: E0707 06:04:56.149175 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Jul 7 06:04:56.149720 kubelet[2727]: E0707 06:04:56.149652 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Jul 7 06:04:56.168755 kubelet[2727]: I0707 06:04:56.168696 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rtj2l" podStartSLOduration=1.168684751 podStartE2EDuration="1.168684751s" podCreationTimestamp="2025-07-07 06:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:04:56.155894919 +0000 UTC m=+8.153020075" watchObservedRunningTime="2025-07-07 06:04:56.168684751 +0000 UTC m=+8.165809907"
Jul 7 06:04:56.664890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount897090459.mount: Deactivated successfully.
Jul 7 06:04:56.979130 kubelet[2727]: E0707 06:04:56.978433 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Jul 7 06:04:57.152227 kubelet[2727]: E0707 06:04:57.152198 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Jul 7 06:04:57.152862 kubelet[2727]: E0707 06:04:57.152552 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Jul 7 06:04:57.329250 containerd[1544]: time="2025-07-07T06:04:57.328974877Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:57.330057 containerd[1544]: time="2025-07-07T06:04:57.329850793Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543"
Jul 7 06:04:57.330839 containerd[1544]: time="2025-07-07T06:04:57.330814480Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:57.332904 containerd[1544]: time="2025-07-07T06:04:57.332874650Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:57.334232 containerd[1544]: time="2025-07-07T06:04:57.333920564Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 1.644998308s"
Jul 7 06:04:57.334232 containerd[1544]: time="2025-07-07T06:04:57.333946834Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\""
Jul 7 06:04:57.340505 containerd[1544]: time="2025-07-07T06:04:57.340482219Z" level=info msg="CreateContainer within sandbox \"8bb9be50fa71999f9c84a0a0be624f33ccbb2c758e53f702d7f9bbf49f79fda7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 7 06:04:57.351721 containerd[1544]: time="2025-07-07T06:04:57.349781162Z" level=info msg="Container 12a1b1d4788e16ed9c5313f902ef21ecf59fd6ba494769c6607b2664efb8f00c: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:04:57.351598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount300927223.mount: Deactivated successfully.
Jul 7 06:04:57.367303 containerd[1544]: time="2025-07-07T06:04:57.367067921Z" level=info msg="CreateContainer within sandbox \"8bb9be50fa71999f9c84a0a0be624f33ccbb2c758e53f702d7f9bbf49f79fda7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"12a1b1d4788e16ed9c5313f902ef21ecf59fd6ba494769c6607b2664efb8f00c\""
Jul 7 06:04:57.367854 containerd[1544]: time="2025-07-07T06:04:57.367811549Z" level=info msg="StartContainer for \"12a1b1d4788e16ed9c5313f902ef21ecf59fd6ba494769c6607b2664efb8f00c\""
Jul 7 06:04:57.368551 containerd[1544]: time="2025-07-07T06:04:57.368447470Z" level=info msg="connecting to shim 12a1b1d4788e16ed9c5313f902ef21ecf59fd6ba494769c6607b2664efb8f00c" address="unix:///run/containerd/s/825810340b5fa0bf6ef58f483533d61b3f28bc5567169fcedf20b04c96ea12ea" protocol=ttrpc version=3
Jul 7 06:04:57.393826 systemd[1]: Started cri-containerd-12a1b1d4788e16ed9c5313f902ef21ecf59fd6ba494769c6607b2664efb8f00c.scope - libcontainer container 12a1b1d4788e16ed9c5313f902ef21ecf59fd6ba494769c6607b2664efb8f00c.
Jul 7 06:04:57.427637 containerd[1544]: time="2025-07-07T06:04:57.427519448Z" level=info msg="StartContainer for \"12a1b1d4788e16ed9c5313f902ef21ecf59fd6ba494769c6607b2664efb8f00c\" returns successfully"
Jul 7 06:05:02.802851 sudo[1802]: pam_unix(sudo:session): session closed for user root
Jul 7 06:05:02.854115 sshd[1801]: Connection closed by 147.75.109.163 port 40838
Jul 7 06:05:02.855042 sshd-session[1799]: pam_unix(sshd:session): session closed for user core
Jul 7 06:05:02.859307 systemd-logind[1528]: Session 7 logged out. Waiting for processes to exit.
Jul 7 06:05:02.861490 systemd[1]: sshd@6-172.236.119.245:22-147.75.109.163:40838.service: Deactivated successfully.
Jul 7 06:05:02.865075 systemd[1]: session-7.scope: Deactivated successfully.
Jul 7 06:05:02.865268 systemd[1]: session-7.scope: Consumed 4.374s CPU time, 230.6M memory peak.
Jul 7 06:05:02.868169 systemd-logind[1528]: Removed session 7.
Jul 7 06:05:04.272782 update_engine[1530]: I20250707 06:05:04.272142 1530 update_attempter.cc:509] Updating boot flags...
Jul 7 06:05:06.030163 kubelet[2727]: I0707 06:05:06.030086 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-6f5d5" podStartSLOduration=9.383660326 podStartE2EDuration="11.030071022s" podCreationTimestamp="2025-07-07 06:04:55 +0000 UTC" firstStartedPulling="2025-07-07 06:04:55.688205478 +0000 UTC m=+7.685330634" lastFinishedPulling="2025-07-07 06:04:57.334616164 +0000 UTC m=+9.331741330" observedRunningTime="2025-07-07 06:04:58.170827771 +0000 UTC m=+10.167952937" watchObservedRunningTime="2025-07-07 06:05:06.030071022 +0000 UTC m=+18.027196178"
Jul 7 06:05:06.040134 systemd[1]: Created slice kubepods-besteffort-pod9068eb3f_39c8_430e_9c4d_edab5270ee10.slice - libcontainer container kubepods-besteffort-pod9068eb3f_39c8_430e_9c4d_edab5270ee10.slice.
Jul 7 06:05:06.091142 kubelet[2727]: I0707 06:05:06.091091 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9068eb3f-39c8-430e-9c4d-edab5270ee10-tigera-ca-bundle\") pod \"calico-typha-6ffc56f4b7-t8wx6\" (UID: \"9068eb3f-39c8-430e-9c4d-edab5270ee10\") " pod="calico-system/calico-typha-6ffc56f4b7-t8wx6"
Jul 7 06:05:06.091142 kubelet[2727]: I0707 06:05:06.091131 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9068eb3f-39c8-430e-9c4d-edab5270ee10-typha-certs\") pod \"calico-typha-6ffc56f4b7-t8wx6\" (UID: \"9068eb3f-39c8-430e-9c4d-edab5270ee10\") " pod="calico-system/calico-typha-6ffc56f4b7-t8wx6"
Jul 7 06:05:06.091537 kubelet[2727]: I0707 06:05:06.091156 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqjcb\" (UniqueName: \"kubernetes.io/projected/9068eb3f-39c8-430e-9c4d-edab5270ee10-kube-api-access-sqjcb\") pod \"calico-typha-6ffc56f4b7-t8wx6\" (UID: \"9068eb3f-39c8-430e-9c4d-edab5270ee10\") " pod="calico-system/calico-typha-6ffc56f4b7-t8wx6"
Jul 7 06:05:06.346402 kubelet[2727]: E0707 06:05:06.345852 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Jul 7 06:05:06.348472 containerd[1544]: time="2025-07-07T06:05:06.348352983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6ffc56f4b7-t8wx6,Uid:9068eb3f-39c8-430e-9c4d-edab5270ee10,Namespace:calico-system,Attempt:0,}"
Jul 7 06:05:06.374141 containerd[1544]: time="2025-07-07T06:05:06.373857279Z" level=info msg="connecting to shim f11a1fbd0fffc891e51559a4d54e0ec80d6d026e8b5c4756046d03ea9d422863" address="unix:///run/containerd/s/213e311371145de3fb79785ba3f5de27f46c295fb67efbb899addb00427727f0" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:05:06.413853 systemd[1]: Started cri-containerd-f11a1fbd0fffc891e51559a4d54e0ec80d6d026e8b5c4756046d03ea9d422863.scope - libcontainer container f11a1fbd0fffc891e51559a4d54e0ec80d6d026e8b5c4756046d03ea9d422863.
Jul 7 06:05:06.437455 systemd[1]: Created slice kubepods-besteffort-pod9e80bce8_56a4_4fcc_9254_02747946412c.slice - libcontainer container kubepods-besteffort-pod9e80bce8_56a4_4fcc_9254_02747946412c.slice.
Jul 7 06:05:06.494140 kubelet[2727]: I0707 06:05:06.494099 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9e80bce8-56a4-4fcc-9254-02747946412c-var-lib-calico\") pod \"calico-node-vnlm8\" (UID: \"9e80bce8-56a4-4fcc-9254-02747946412c\") " pod="calico-system/calico-node-vnlm8"
Jul 7 06:05:06.494140 kubelet[2727]: I0707 06:05:06.494134 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9e80bce8-56a4-4fcc-9254-02747946412c-flexvol-driver-host\") pod \"calico-node-vnlm8\" (UID: \"9e80bce8-56a4-4fcc-9254-02747946412c\") " pod="calico-system/calico-node-vnlm8"
Jul 7 06:05:06.494270 kubelet[2727]: I0707 06:05:06.494153 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9e80bce8-56a4-4fcc-9254-02747946412c-cni-bin-dir\") pod \"calico-node-vnlm8\" (UID: \"9e80bce8-56a4-4fcc-9254-02747946412c\") " pod="calico-system/calico-node-vnlm8"
Jul 7 06:05:06.494270 kubelet[2727]: I0707 06:05:06.494167 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e80bce8-56a4-4fcc-9254-02747946412c-xtables-lock\") pod \"calico-node-vnlm8\" (UID: \"9e80bce8-56a4-4fcc-9254-02747946412c\") " pod="calico-system/calico-node-vnlm8"
Jul 7 06:05:06.494270 kubelet[2727]: I0707 06:05:06.494180 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9e80bce8-56a4-4fcc-9254-02747946412c-cni-log-dir\") pod \"calico-node-vnlm8\" (UID: \"9e80bce8-56a4-4fcc-9254-02747946412c\") " pod="calico-system/calico-node-vnlm8"
Jul 7 06:05:06.494270 kubelet[2727]: I0707 06:05:06.494192 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e80bce8-56a4-4fcc-9254-02747946412c-lib-modules\") pod \"calico-node-vnlm8\" (UID: \"9e80bce8-56a4-4fcc-9254-02747946412c\") " pod="calico-system/calico-node-vnlm8"
Jul 7 06:05:06.494270 kubelet[2727]: I0707 06:05:06.494205 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9e80bce8-56a4-4fcc-9254-02747946412c-node-certs\") pod \"calico-node-vnlm8\" (UID: \"9e80bce8-56a4-4fcc-9254-02747946412c\") " pod="calico-system/calico-node-vnlm8"
Jul 7 06:05:06.494437 kubelet[2727]: I0707 06:05:06.494216 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9e80bce8-56a4-4fcc-9254-02747946412c-policysync\") pod \"calico-node-vnlm8\" (UID: \"9e80bce8-56a4-4fcc-9254-02747946412c\") " pod="calico-system/calico-node-vnlm8"
Jul 7 06:05:06.494437 kubelet[2727]: I0707 06:05:06.494229 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e80bce8-56a4-4fcc-9254-02747946412c-tigera-ca-bundle\") pod \"calico-node-vnlm8\" (UID: \"9e80bce8-56a4-4fcc-9254-02747946412c\") " pod="calico-system/calico-node-vnlm8"
Jul 7 06:05:06.494437 kubelet[2727]: I0707 06:05:06.494246 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9e80bce8-56a4-4fcc-9254-02747946412c-cni-net-dir\") pod \"calico-node-vnlm8\" (UID: \"9e80bce8-56a4-4fcc-9254-02747946412c\") " pod="calico-system/calico-node-vnlm8"
Jul 7 06:05:06.494437 kubelet[2727]: I0707 06:05:06.494258 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkw8p\" (UniqueName: \"kubernetes.io/projected/9e80bce8-56a4-4fcc-9254-02747946412c-kube-api-access-wkw8p\") pod \"calico-node-vnlm8\" (UID: \"9e80bce8-56a4-4fcc-9254-02747946412c\") " pod="calico-system/calico-node-vnlm8"
Jul 7 06:05:06.494437 kubelet[2727]: I0707 06:05:06.494271 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9e80bce8-56a4-4fcc-9254-02747946412c-var-run-calico\") pod \"calico-node-vnlm8\" (UID: \"9e80bce8-56a4-4fcc-9254-02747946412c\") " pod="calico-system/calico-node-vnlm8"
Jul 7 06:05:06.503374 containerd[1544]: time="2025-07-07T06:05:06.503343673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6ffc56f4b7-t8wx6,Uid:9068eb3f-39c8-430e-9c4d-edab5270ee10,Namespace:calico-system,Attempt:0,} returns sandbox id \"f11a1fbd0fffc891e51559a4d54e0ec80d6d026e8b5c4756046d03ea9d422863\""
Jul 7 06:05:06.504418 kubelet[2727]: E0707 06:05:06.504398 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Jul 7 06:05:06.506294 containerd[1544]: time="2025-07-07T06:05:06.506181020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 7 06:05:06.601071 kubelet[2727]: E0707 06:05:06.600978 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:05:06.601237 kubelet[2727]: W0707 06:05:06.601133 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:05:06.601237 kubelet[2727]: E0707 06:05:06.601153 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:05:06.608190 kubelet[2727]: E0707 06:05:06.608162 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:05:06.608190 kubelet[2727]: W0707 06:05:06.608176 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:05:06.608190 kubelet[2727]: E0707 06:05:06.608191 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:05:06.712399 kubelet[2727]: E0707 06:05:06.711975 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxjft" podUID="699001e3-2bb8-49d9-b0d8-60e5a17aecbb"
Jul 7 06:05:06.750791 containerd[1544]: time="2025-07-07T06:05:06.750759819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vnlm8,Uid:9e80bce8-56a4-4fcc-9254-02747946412c,Namespace:calico-system,Attempt:0,}"
Jul 7 06:05:06.767907 containerd[1544]: time="2025-07-07T06:05:06.767867745Z" level=info msg="connecting to shim 00fb5c04824ab08cbef365e96938420a2a8140e0c2ec05983d1ffa3d75ed7b43" address="unix:///run/containerd/s/d16469237b34dfd63504fb868f2836a1b2818f08c43735ef35581b9f5707c374" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:05:06.792598 kubelet[2727]: E0707 06:05:06.792213 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:05:06.792598 kubelet[2727]: W0707 06:05:06.792423 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:05:06.792598 kubelet[2727]: E0707 06:05:06.792439 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:05:06.795530 kubelet[2727]: E0707 06:05:06.792757 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:05:06.795530 kubelet[2727]: W0707 06:05:06.792769 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:05:06.795530 kubelet[2727]: E0707 06:05:06.792780 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:05:06.795530 kubelet[2727]: E0707 06:05:06.793885 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:05:06.795530 kubelet[2727]: W0707 06:05:06.793893 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:05:06.795530 kubelet[2727]: E0707 06:05:06.793902 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:05:06.795530 kubelet[2727]: E0707 06:05:06.794341 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:05:06.795530 kubelet[2727]: W0707 06:05:06.794367 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:05:06.795530 kubelet[2727]: E0707 06:05:06.794376 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:05:06.795530 kubelet[2727]: E0707 06:05:06.795067 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:05:06.795834 kubelet[2727]: W0707 06:05:06.795076 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:05:06.795834 kubelet[2727]: E0707 06:05:06.795084 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:05:06.795834 kubelet[2727]: E0707 06:05:06.795300 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:05:06.795834 kubelet[2727]: W0707 06:05:06.795307 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:05:06.795834 kubelet[2727]: E0707 06:05:06.795315 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:05:06.795834 kubelet[2727]: E0707 06:05:06.795661 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:05:06.795834 kubelet[2727]: W0707 06:05:06.795688 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:05:06.795834 kubelet[2727]: E0707 06:05:06.795733 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:05:06.796205 kubelet[2727]: E0707 06:05:06.796009 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:05:06.796205 kubelet[2727]: W0707 06:05:06.796022 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:05:06.796205 kubelet[2727]: E0707 06:05:06.796030 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:05:06.796738 kubelet[2727]: E0707 06:05:06.796588 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:05:06.796738 kubelet[2727]: W0707 06:05:06.796599 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:05:06.796738 kubelet[2727]: E0707 06:05:06.796608 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:05:06.797014 kubelet[2727]: E0707 06:05:06.796983 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:05:06.797014 kubelet[2727]: W0707 06:05:06.796999 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:05:06.797068 kubelet[2727]: E0707 06:05:06.797042 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:05:06.800822 kubelet[2727]: E0707 06:05:06.800763 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:05:06.800822 kubelet[2727]: W0707 06:05:06.800778 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:05:06.800822 kubelet[2727]: E0707 06:05:06.800787 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:05:06.801046 kubelet[2727]: E0707 06:05:06.801012 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:05:06.801046 kubelet[2727]: W0707 06:05:06.801026 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:05:06.801433 kubelet[2727]: E0707 06:05:06.801053 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:05:06.801433 kubelet[2727]: E0707 06:05:06.801427 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:05:06.801472 kubelet[2727]: W0707 06:05:06.801435 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:05:06.801472 kubelet[2727]: E0707 06:05:06.801443 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:05:06.802197 kubelet[2727]: E0707 06:05:06.801755 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:05:06.802197 kubelet[2727]: W0707 06:05:06.801768 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:05:06.802197 kubelet[2727]: E0707 06:05:06.801775 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:05:06.801921 systemd[1]: Started cri-containerd-00fb5c04824ab08cbef365e96938420a2a8140e0c2ec05983d1ffa3d75ed7b43.scope - libcontainer container 00fb5c04824ab08cbef365e96938420a2a8140e0c2ec05983d1ffa3d75ed7b43.
Jul 7 06:05:06.802907 kubelet[2727]: E0707 06:05:06.802884 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:05:06.802907 kubelet[2727]: W0707 06:05:06.802897 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:05:06.802907 kubelet[2727]: E0707 06:05:06.802906 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:05:06.803192 kubelet[2727]: E0707 06:05:06.803093 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:05:06.803192 kubelet[2727]: W0707 06:05:06.803124 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:05:06.803192 kubelet[2727]: E0707 06:05:06.803131 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:05:06.804207 kubelet[2727]: E0707 06:05:06.804085 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:05:06.804207 kubelet[2727]: W0707 06:05:06.804098 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:05:06.804207 kubelet[2727]: E0707 06:05:06.804107 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 7 06:05:06.804931 kubelet[2727]: E0707 06:05:06.804576 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.804931 kubelet[2727]: W0707 06:05:06.804592 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.804931 kubelet[2727]: E0707 06:05:06.804600 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:06.805418 kubelet[2727]: E0707 06:05:06.805395 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.805465 kubelet[2727]: W0707 06:05:06.805427 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.805465 kubelet[2727]: E0707 06:05:06.805436 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:06.805789 kubelet[2727]: E0707 06:05:06.805598 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.805789 kubelet[2727]: W0707 06:05:06.805605 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.805789 kubelet[2727]: E0707 06:05:06.805612 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:06.806116 kubelet[2727]: E0707 06:05:06.806048 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.806116 kubelet[2727]: W0707 06:05:06.806061 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.806116 kubelet[2727]: E0707 06:05:06.806068 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:06.806116 kubelet[2727]: I0707 06:05:06.806084 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/699001e3-2bb8-49d9-b0d8-60e5a17aecbb-kubelet-dir\") pod \"csi-node-driver-wxjft\" (UID: \"699001e3-2bb8-49d9-b0d8-60e5a17aecbb\") " pod="calico-system/csi-node-driver-wxjft" Jul 7 06:05:06.806621 kubelet[2727]: E0707 06:05:06.806411 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.806621 kubelet[2727]: W0707 06:05:06.806420 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.806621 kubelet[2727]: E0707 06:05:06.806428 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:06.806621 kubelet[2727]: I0707 06:05:06.806501 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/699001e3-2bb8-49d9-b0d8-60e5a17aecbb-varrun\") pod \"csi-node-driver-wxjft\" (UID: \"699001e3-2bb8-49d9-b0d8-60e5a17aecbb\") " pod="calico-system/csi-node-driver-wxjft" Jul 7 06:05:06.807313 kubelet[2727]: E0707 06:05:06.807049 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.807313 kubelet[2727]: W0707 06:05:06.807303 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.807313 kubelet[2727]: E0707 06:05:06.807312 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:06.807381 kubelet[2727]: I0707 06:05:06.807337 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99bjg\" (UniqueName: \"kubernetes.io/projected/699001e3-2bb8-49d9-b0d8-60e5a17aecbb-kube-api-access-99bjg\") pod \"csi-node-driver-wxjft\" (UID: \"699001e3-2bb8-49d9-b0d8-60e5a17aecbb\") " pod="calico-system/csi-node-driver-wxjft" Jul 7 06:05:06.807675 kubelet[2727]: E0707 06:05:06.807576 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.807675 kubelet[2727]: W0707 06:05:06.807588 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.807675 kubelet[2727]: E0707 06:05:06.807606 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:06.808168 kubelet[2727]: E0707 06:05:06.808083 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.808168 kubelet[2727]: W0707 06:05:06.808097 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.808168 kubelet[2727]: E0707 06:05:06.808106 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:06.809632 kubelet[2727]: E0707 06:05:06.809609 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.809632 kubelet[2727]: W0707 06:05:06.809628 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.809805 kubelet[2727]: E0707 06:05:06.809636 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:06.810821 kubelet[2727]: E0707 06:05:06.810800 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.811014 kubelet[2727]: W0707 06:05:06.810890 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.811014 kubelet[2727]: E0707 06:05:06.810926 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:06.811014 kubelet[2727]: I0707 06:05:06.810998 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/699001e3-2bb8-49d9-b0d8-60e5a17aecbb-socket-dir\") pod \"csi-node-driver-wxjft\" (UID: \"699001e3-2bb8-49d9-b0d8-60e5a17aecbb\") " pod="calico-system/csi-node-driver-wxjft" Jul 7 06:05:06.812260 kubelet[2727]: E0707 06:05:06.812247 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.812318 kubelet[2727]: W0707 06:05:06.812307 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.812362 kubelet[2727]: E0707 06:05:06.812352 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:06.812660 kubelet[2727]: E0707 06:05:06.812614 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.812660 kubelet[2727]: W0707 06:05:06.812633 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.812660 kubelet[2727]: E0707 06:05:06.812643 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:06.813012 kubelet[2727]: E0707 06:05:06.812994 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.813012 kubelet[2727]: W0707 06:05:06.813007 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.813115 kubelet[2727]: E0707 06:05:06.813016 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:06.813603 kubelet[2727]: E0707 06:05:06.813584 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.813603 kubelet[2727]: W0707 06:05:06.813597 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.813662 kubelet[2727]: E0707 06:05:06.813606 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:06.814016 kubelet[2727]: E0707 06:05:06.813973 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.814126 kubelet[2727]: W0707 06:05:06.813985 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.814126 kubelet[2727]: E0707 06:05:06.814109 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:06.814990 kubelet[2727]: E0707 06:05:06.814951 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.815088 kubelet[2727]: W0707 06:05:06.815074 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.815211 kubelet[2727]: E0707 06:05:06.815199 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:06.815514 kubelet[2727]: I0707 06:05:06.815474 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/699001e3-2bb8-49d9-b0d8-60e5a17aecbb-registration-dir\") pod \"csi-node-driver-wxjft\" (UID: \"699001e3-2bb8-49d9-b0d8-60e5a17aecbb\") " pod="calico-system/csi-node-driver-wxjft" Jul 7 06:05:06.816533 kubelet[2727]: E0707 06:05:06.816457 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.816533 kubelet[2727]: W0707 06:05:06.816470 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.816533 kubelet[2727]: E0707 06:05:06.816480 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:06.816956 kubelet[2727]: E0707 06:05:06.816914 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.817038 kubelet[2727]: W0707 06:05:06.816992 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.817083 kubelet[2727]: E0707 06:05:06.817006 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:06.858148 containerd[1544]: time="2025-07-07T06:05:06.856465903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vnlm8,Uid:9e80bce8-56a4-4fcc-9254-02747946412c,Namespace:calico-system,Attempt:0,} returns sandbox id \"00fb5c04824ab08cbef365e96938420a2a8140e0c2ec05983d1ffa3d75ed7b43\"" Jul 7 06:05:06.916254 kubelet[2727]: E0707 06:05:06.916209 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.916254 kubelet[2727]: W0707 06:05:06.916230 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.916254 kubelet[2727]: E0707 06:05:06.916248 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:06.916489 kubelet[2727]: E0707 06:05:06.916474 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.916524 kubelet[2727]: W0707 06:05:06.916513 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.916524 kubelet[2727]: E0707 06:05:06.916523 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:06.916954 kubelet[2727]: E0707 06:05:06.916933 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.916954 kubelet[2727]: W0707 06:05:06.916953 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.917026 kubelet[2727]: E0707 06:05:06.916962 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:06.917209 kubelet[2727]: E0707 06:05:06.917194 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.917209 kubelet[2727]: W0707 06:05:06.917206 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.917268 kubelet[2727]: E0707 06:05:06.917214 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:06.917462 kubelet[2727]: E0707 06:05:06.917447 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.917462 kubelet[2727]: W0707 06:05:06.917459 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.917462 kubelet[2727]: E0707 06:05:06.917468 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:06.918053 kubelet[2727]: E0707 06:05:06.918038 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.918053 kubelet[2727]: W0707 06:05:06.918049 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.918124 kubelet[2727]: E0707 06:05:06.918058 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:06.918301 kubelet[2727]: E0707 06:05:06.918286 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.918301 kubelet[2727]: W0707 06:05:06.918297 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.918401 kubelet[2727]: E0707 06:05:06.918305 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:06.918693 kubelet[2727]: E0707 06:05:06.918679 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.918693 kubelet[2727]: W0707 06:05:06.918690 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.918772 kubelet[2727]: E0707 06:05:06.918718 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:06.918963 kubelet[2727]: E0707 06:05:06.918948 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.918963 kubelet[2727]: W0707 06:05:06.918959 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.919054 kubelet[2727]: E0707 06:05:06.918967 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:06.919167 kubelet[2727]: E0707 06:05:06.919153 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.919167 kubelet[2727]: W0707 06:05:06.919164 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.919306 kubelet[2727]: E0707 06:05:06.919171 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:06.919334 kubelet[2727]: E0707 06:05:06.919324 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.919334 kubelet[2727]: W0707 06:05:06.919331 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.919419 kubelet[2727]: E0707 06:05:06.919338 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:06.919496 kubelet[2727]: E0707 06:05:06.919479 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.919496 kubelet[2727]: W0707 06:05:06.919491 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.919651 kubelet[2727]: E0707 06:05:06.919499 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:06.919679 kubelet[2727]: E0707 06:05:06.919665 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.919679 kubelet[2727]: W0707 06:05:06.919672 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.919679 kubelet[2727]: E0707 06:05:06.919679 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:06.919947 kubelet[2727]: E0707 06:05:06.919933 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.919947 kubelet[2727]: W0707 06:05:06.919944 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.919993 kubelet[2727]: E0707 06:05:06.919952 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:06.920211 kubelet[2727]: E0707 06:05:06.920194 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.920211 kubelet[2727]: W0707 06:05:06.920207 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.920312 kubelet[2727]: E0707 06:05:06.920215 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:06.920977 kubelet[2727]: E0707 06:05:06.920960 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.920977 kubelet[2727]: W0707 06:05:06.920973 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.921025 kubelet[2727]: E0707 06:05:06.920982 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:06.921208 kubelet[2727]: E0707 06:05:06.921189 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.921208 kubelet[2727]: W0707 06:05:06.921204 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.921275 kubelet[2727]: E0707 06:05:06.921212 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:06.921421 kubelet[2727]: E0707 06:05:06.921397 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.921421 kubelet[2727]: W0707 06:05:06.921413 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.921421 kubelet[2727]: E0707 06:05:06.921420 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:06.921798 kubelet[2727]: E0707 06:05:06.921580 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.921798 kubelet[2727]: W0707 06:05:06.921587 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.921798 kubelet[2727]: E0707 06:05:06.921594 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:06.921990 kubelet[2727]: E0707 06:05:06.921969 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.921990 kubelet[2727]: W0707 06:05:06.921981 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.921990 kubelet[2727]: E0707 06:05:06.921989 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:06.923733 kubelet[2727]: E0707 06:05:06.922499 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.923733 kubelet[2727]: W0707 06:05:06.922519 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.923733 kubelet[2727]: E0707 06:05:06.922530 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:06.923733 kubelet[2727]: E0707 06:05:06.922844 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.923733 kubelet[2727]: W0707 06:05:06.922852 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.923733 kubelet[2727]: E0707 06:05:06.922859 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:06.923733 kubelet[2727]: E0707 06:05:06.923117 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.923733 kubelet[2727]: W0707 06:05:06.923125 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.923733 kubelet[2727]: E0707 06:05:06.923132 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:06.923733 kubelet[2727]: E0707 06:05:06.923561 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.923935 kubelet[2727]: W0707 06:05:06.923568 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.923935 kubelet[2727]: E0707 06:05:06.923576 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:06.923935 kubelet[2727]: E0707 06:05:06.923827 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.923935 kubelet[2727]: W0707 06:05:06.923835 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.923935 kubelet[2727]: E0707 06:05:06.923842 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:06.929441 kubelet[2727]: E0707 06:05:06.929420 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:06.929441 kubelet[2727]: W0707 06:05:06.929432 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:06.929441 kubelet[2727]: E0707 06:05:06.929441 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:07.243491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1234487630.mount: Deactivated successfully. 
Jul 7 06:05:08.009413 containerd[1544]: time="2025-07-07T06:05:08.009186268Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:08.010204 containerd[1544]: time="2025-07-07T06:05:08.010171792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 7 06:05:08.011077 containerd[1544]: time="2025-07-07T06:05:08.011040745Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:08.012812 containerd[1544]: time="2025-07-07T06:05:08.012760972Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:08.013288 containerd[1544]: time="2025-07-07T06:05:08.013126069Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 1.5069086s" Jul 7 06:05:08.013288 containerd[1544]: time="2025-07-07T06:05:08.013152609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 7 06:05:08.014784 containerd[1544]: time="2025-07-07T06:05:08.014765467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 7 06:05:08.032596 containerd[1544]: time="2025-07-07T06:05:08.032565344Z" level=info msg="CreateContainer within sandbox \"f11a1fbd0fffc891e51559a4d54e0ec80d6d026e8b5c4756046d03ea9d422863\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 7 06:05:08.038062 containerd[1544]: time="2025-07-07T06:05:08.038020435Z" level=info msg="Container c18aae21aa4cab31765debc087ffc964b57a3fc0191796d23a792ee6fec8c776: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:05:08.043014 containerd[1544]: time="2025-07-07T06:05:08.042983637Z" level=info msg="CreateContainer within sandbox \"f11a1fbd0fffc891e51559a4d54e0ec80d6d026e8b5c4756046d03ea9d422863\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c18aae21aa4cab31765debc087ffc964b57a3fc0191796d23a792ee6fec8c776\"" Jul 7 06:05:08.043976 containerd[1544]: time="2025-07-07T06:05:08.043904071Z" level=info msg="StartContainer for \"c18aae21aa4cab31765debc087ffc964b57a3fc0191796d23a792ee6fec8c776\"" Jul 7 06:05:08.045381 containerd[1544]: time="2025-07-07T06:05:08.045314470Z" level=info msg="connecting to shim c18aae21aa4cab31765debc087ffc964b57a3fc0191796d23a792ee6fec8c776" address="unix:///run/containerd/s/213e311371145de3fb79785ba3f5de27f46c295fb67efbb899addb00427727f0" protocol=ttrpc version=3 Jul 7 06:05:08.064817 systemd[1]: Started cri-containerd-c18aae21aa4cab31765debc087ffc964b57a3fc0191796d23a792ee6fec8c776.scope - libcontainer container c18aae21aa4cab31765debc087ffc964b57a3fc0191796d23a792ee6fec8c776. 
Jul 7 06:05:08.121375 containerd[1544]: time="2025-07-07T06:05:08.121342095Z" level=info msg="StartContainer for \"c18aae21aa4cab31765debc087ffc964b57a3fc0191796d23a792ee6fec8c776\" returns successfully" Jul 7 06:05:08.201933 kubelet[2727]: E0707 06:05:08.201900 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:05:08.217875 kubelet[2727]: E0707 06:05:08.217843 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:08.217968 kubelet[2727]: W0707 06:05:08.217866 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:08.217968 kubelet[2727]: E0707 06:05:08.217906 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:08.218271 kubelet[2727]: E0707 06:05:08.218248 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:08.218271 kubelet[2727]: W0707 06:05:08.218264 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:08.218460 kubelet[2727]: E0707 06:05:08.218421 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:08.220232 kubelet[2727]: E0707 06:05:08.219625 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:08.220232 kubelet[2727]: W0707 06:05:08.220230 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:08.220792 kubelet[2727]: E0707 06:05:08.220244 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:08.221221 kubelet[2727]: E0707 06:05:08.221169 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:08.221221 kubelet[2727]: W0707 06:05:08.221183 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:08.222264 kubelet[2727]: E0707 06:05:08.222238 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:08.222470 kubelet[2727]: E0707 06:05:08.222448 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:08.222470 kubelet[2727]: W0707 06:05:08.222463 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:08.222739 kubelet[2727]: E0707 06:05:08.222649 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:08.223396 kubelet[2727]: E0707 06:05:08.223368 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:08.223511 kubelet[2727]: W0707 06:05:08.223385 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:08.223511 kubelet[2727]: E0707 06:05:08.223506 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:08.224573 kubelet[2727]: E0707 06:05:08.224455 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:08.224573 kubelet[2727]: W0707 06:05:08.224469 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:08.224573 kubelet[2727]: E0707 06:05:08.224477 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:08.225102 kubelet[2727]: E0707 06:05:08.225083 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:08.225102 kubelet[2727]: W0707 06:05:08.225096 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:08.225168 kubelet[2727]: E0707 06:05:08.225106 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:08.226591 kubelet[2727]: E0707 06:05:08.226406 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:08.226591 kubelet[2727]: W0707 06:05:08.226439 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:08.226591 kubelet[2727]: E0707 06:05:08.226460 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:08.227063 kubelet[2727]: E0707 06:05:08.227014 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:08.227063 kubelet[2727]: W0707 06:05:08.227028 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:08.227299 kubelet[2727]: E0707 06:05:08.227037 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:08.227501 kubelet[2727]: E0707 06:05:08.227460 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:08.227598 kubelet[2727]: W0707 06:05:08.227586 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:08.227674 kubelet[2727]: E0707 06:05:08.227663 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:08.227961 kubelet[2727]: E0707 06:05:08.227949 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:08.228108 kubelet[2727]: W0707 06:05:08.228000 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:08.228108 kubelet[2727]: E0707 06:05:08.228012 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:08.228246 kubelet[2727]: E0707 06:05:08.228235 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:08.228303 kubelet[2727]: W0707 06:05:08.228292 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:08.228344 kubelet[2727]: E0707 06:05:08.228335 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:08.228626 kubelet[2727]: E0707 06:05:08.228520 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:08.228626 kubelet[2727]: W0707 06:05:08.228530 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:08.228626 kubelet[2727]: E0707 06:05:08.228537 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:08.228833 kubelet[2727]: E0707 06:05:08.228822 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:08.228897 kubelet[2727]: W0707 06:05:08.228886 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:08.228943 kubelet[2727]: E0707 06:05:08.228934 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:08.229275 kubelet[2727]: E0707 06:05:08.229187 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:08.229275 kubelet[2727]: W0707 06:05:08.229197 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:08.229275 kubelet[2727]: E0707 06:05:08.229205 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:08.229425 kubelet[2727]: E0707 06:05:08.229414 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:08.229478 kubelet[2727]: W0707 06:05:08.229468 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:08.229519 kubelet[2727]: E0707 06:05:08.229510 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:08.229870 kubelet[2727]: E0707 06:05:08.229760 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:08.229870 kubelet[2727]: W0707 06:05:08.229771 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:08.229870 kubelet[2727]: E0707 06:05:08.229779 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:08.230034 kubelet[2727]: E0707 06:05:08.230023 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:08.230092 kubelet[2727]: W0707 06:05:08.230081 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:08.230139 kubelet[2727]: E0707 06:05:08.230128 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:08.230382 kubelet[2727]: E0707 06:05:08.230304 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:08.230382 kubelet[2727]: W0707 06:05:08.230313 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:08.230382 kubelet[2727]: E0707 06:05:08.230321 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:08.230537 kubelet[2727]: E0707 06:05:08.230527 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:08.230592 kubelet[2727]: W0707 06:05:08.230582 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:08.230631 kubelet[2727]: E0707 06:05:08.230623 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:08.231071 kubelet[2727]: E0707 06:05:08.230841 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:08.231071 kubelet[2727]: W0707 06:05:08.230851 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:08.231071 kubelet[2727]: E0707 06:05:08.230859 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:08.231220 kubelet[2727]: E0707 06:05:08.231210 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:08.231279 kubelet[2727]: W0707 06:05:08.231269 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:08.231319 kubelet[2727]: E0707 06:05:08.231310 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:08.231500 kubelet[2727]: E0707 06:05:08.231489 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:08.231546 kubelet[2727]: W0707 06:05:08.231537 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:08.231584 kubelet[2727]: E0707 06:05:08.231576 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:08.231780 kubelet[2727]: E0707 06:05:08.231769 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:08.231840 kubelet[2727]: W0707 06:05:08.231829 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:08.231880 kubelet[2727]: E0707 06:05:08.231871 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:08.232128 kubelet[2727]: E0707 06:05:08.232040 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:08.232128 kubelet[2727]: W0707 06:05:08.232050 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:08.232128 kubelet[2727]: E0707 06:05:08.232058 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:08.232291 kubelet[2727]: E0707 06:05:08.232280 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:08.232343 kubelet[2727]: W0707 06:05:08.232333 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:08.232715 kubelet[2727]: E0707 06:05:08.232378 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:08.232954 kubelet[2727]: E0707 06:05:08.232943 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:08.232999 kubelet[2727]: W0707 06:05:08.232990 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:08.233039 kubelet[2727]: E0707 06:05:08.233030 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:08.233207 kubelet[2727]: E0707 06:05:08.233197 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:08.233266 kubelet[2727]: W0707 06:05:08.233255 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:08.233307 kubelet[2727]: E0707 06:05:08.233297 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:08.233471 kubelet[2727]: E0707 06:05:08.233461 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:08.233517 kubelet[2727]: W0707 06:05:08.233508 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:08.233556 kubelet[2727]: E0707 06:05:08.233547 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:08.233774 kubelet[2727]: E0707 06:05:08.233763 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:08.233834 kubelet[2727]: W0707 06:05:08.233823 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:08.233874 kubelet[2727]: E0707 06:05:08.233866 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:08.234148 kubelet[2727]: E0707 06:05:08.234138 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:08.234196 kubelet[2727]: W0707 06:05:08.234187 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:08.234244 kubelet[2727]: E0707 06:05:08.234234 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:08.234719 kubelet[2727]: E0707 06:05:08.234460 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:08.234719 kubelet[2727]: W0707 06:05:08.234470 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:08.234719 kubelet[2727]: E0707 06:05:08.234477 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:08.810995 containerd[1544]: time="2025-07-07T06:05:08.810957721Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:08.811514 containerd[1544]: time="2025-07-07T06:05:08.811493398Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 7 06:05:08.812001 containerd[1544]: time="2025-07-07T06:05:08.811963134Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:08.813747 containerd[1544]: time="2025-07-07T06:05:08.813171955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:08.814068 containerd[1544]: time="2025-07-07T06:05:08.814042989Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 799.195273ms" Jul 7 06:05:08.814322 containerd[1544]: time="2025-07-07T06:05:08.814307957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 7 06:05:08.818021 containerd[1544]: time="2025-07-07T06:05:08.817988870Z" level=info msg="CreateContainer within sandbox \"00fb5c04824ab08cbef365e96938420a2a8140e0c2ec05983d1ffa3d75ed7b43\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 7 06:05:08.827607 containerd[1544]: time="2025-07-07T06:05:08.824331233Z" level=info msg="Container 27b74744c562ae50a2b5f50266d0a85ffd9e139b347a4ba02e64be044f0cd01d: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:05:08.834993 containerd[1544]: time="2025-07-07T06:05:08.834956193Z" level=info msg="CreateContainer within sandbox \"00fb5c04824ab08cbef365e96938420a2a8140e0c2ec05983d1ffa3d75ed7b43\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"27b74744c562ae50a2b5f50266d0a85ffd9e139b347a4ba02e64be044f0cd01d\"" Jul 7 06:05:08.835807 containerd[1544]: time="2025-07-07T06:05:08.835778157Z" level=info msg="StartContainer for \"27b74744c562ae50a2b5f50266d0a85ffd9e139b347a4ba02e64be044f0cd01d\"" Jul 7 06:05:08.838209 containerd[1544]: time="2025-07-07T06:05:08.838152230Z" level=info msg="connecting to shim 27b74744c562ae50a2b5f50266d0a85ffd9e139b347a4ba02e64be044f0cd01d" address="unix:///run/containerd/s/d16469237b34dfd63504fb868f2836a1b2818f08c43735ef35581b9f5707c374" protocol=ttrpc version=3 Jul 7 06:05:08.862830 systemd[1]: Started cri-containerd-27b74744c562ae50a2b5f50266d0a85ffd9e139b347a4ba02e64be044f0cd01d.scope - libcontainer container 27b74744c562ae50a2b5f50266d0a85ffd9e139b347a4ba02e64be044f0cd01d. 
Jul 7 06:05:08.910554 containerd[1544]: time="2025-07-07T06:05:08.910506812Z" level=info msg="StartContainer for \"27b74744c562ae50a2b5f50266d0a85ffd9e139b347a4ba02e64be044f0cd01d\" returns successfully" Jul 7 06:05:08.923847 systemd[1]: cri-containerd-27b74744c562ae50a2b5f50266d0a85ffd9e139b347a4ba02e64be044f0cd01d.scope: Deactivated successfully. Jul 7 06:05:08.926064 containerd[1544]: time="2025-07-07T06:05:08.926033557Z" level=info msg="received exit event container_id:\"27b74744c562ae50a2b5f50266d0a85ffd9e139b347a4ba02e64be044f0cd01d\" id:\"27b74744c562ae50a2b5f50266d0a85ffd9e139b347a4ba02e64be044f0cd01d\" pid:3419 exited_at:{seconds:1751868308 nanos:925630200}" Jul 7 06:05:08.926853 containerd[1544]: time="2025-07-07T06:05:08.926097637Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27b74744c562ae50a2b5f50266d0a85ffd9e139b347a4ba02e64be044f0cd01d\" id:\"27b74744c562ae50a2b5f50266d0a85ffd9e139b347a4ba02e64be044f0cd01d\" pid:3419 exited_at:{seconds:1751868308 nanos:925630200}" Jul 7 06:05:08.955465 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27b74744c562ae50a2b5f50266d0a85ffd9e139b347a4ba02e64be044f0cd01d-rootfs.mount: Deactivated successfully. 
Jul 7 06:05:09.113564 kubelet[2727]: E0707 06:05:09.113426 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxjft" podUID="699001e3-2bb8-49d9-b0d8-60e5a17aecbb" Jul 7 06:05:09.215339 kubelet[2727]: I0707 06:05:09.215203 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:05:09.216508 kubelet[2727]: E0707 06:05:09.216481 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:05:09.218283 containerd[1544]: time="2025-07-07T06:05:09.217978552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 7 06:05:09.237115 kubelet[2727]: I0707 06:05:09.236981 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6ffc56f4b7-t8wx6" podStartSLOduration=1.728657368 podStartE2EDuration="3.236957498s" podCreationTimestamp="2025-07-07 06:05:06 +0000 UTC" firstStartedPulling="2025-07-07 06:05:06.505584424 +0000 UTC m=+18.502709590" lastFinishedPulling="2025-07-07 06:05:08.013884554 +0000 UTC m=+20.011009720" observedRunningTime="2025-07-07 06:05:08.234686963 +0000 UTC m=+20.231812119" watchObservedRunningTime="2025-07-07 06:05:09.236957498 +0000 UTC m=+21.234082654" Jul 7 06:05:11.113633 kubelet[2727]: E0707 06:05:11.113579 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxjft" podUID="699001e3-2bb8-49d9-b0d8-60e5a17aecbb" Jul 7 06:05:11.398874 containerd[1544]: time="2025-07-07T06:05:11.398834373Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:11.399877 containerd[1544]: time="2025-07-07T06:05:11.399683238Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 7 06:05:11.400800 containerd[1544]: time="2025-07-07T06:05:11.400775191Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:11.404619 containerd[1544]: time="2025-07-07T06:05:11.404594007Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:11.406747 containerd[1544]: time="2025-07-07T06:05:11.406722704Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 2.188689142s" Jul 7 06:05:11.406747 containerd[1544]: time="2025-07-07T06:05:11.406744424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 7 06:05:11.409619 containerd[1544]: time="2025-07-07T06:05:11.409594286Z" level=info msg="CreateContainer within sandbox \"00fb5c04824ab08cbef365e96938420a2a8140e0c2ec05983d1ffa3d75ed7b43\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 7 06:05:11.421796 containerd[1544]: time="2025-07-07T06:05:11.420810996Z" level=info msg="Container dc32d56cc111a6d3433e4d1ef7caf6635fcceb00f5149da47bc648fa42b2da67: CDI devices from CRI 
Config.CDIDevices: []" Jul 7 06:05:11.434224 containerd[1544]: time="2025-07-07T06:05:11.434195083Z" level=info msg="CreateContainer within sandbox \"00fb5c04824ab08cbef365e96938420a2a8140e0c2ec05983d1ffa3d75ed7b43\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"dc32d56cc111a6d3433e4d1ef7caf6635fcceb00f5149da47bc648fa42b2da67\"" Jul 7 06:05:11.435766 containerd[1544]: time="2025-07-07T06:05:11.435324546Z" level=info msg="StartContainer for \"dc32d56cc111a6d3433e4d1ef7caf6635fcceb00f5149da47bc648fa42b2da67\"" Jul 7 06:05:11.436682 containerd[1544]: time="2025-07-07T06:05:11.436653728Z" level=info msg="connecting to shim dc32d56cc111a6d3433e4d1ef7caf6635fcceb00f5149da47bc648fa42b2da67" address="unix:///run/containerd/s/d16469237b34dfd63504fb868f2836a1b2818f08c43735ef35581b9f5707c374" protocol=ttrpc version=3 Jul 7 06:05:11.460830 systemd[1]: Started cri-containerd-dc32d56cc111a6d3433e4d1ef7caf6635fcceb00f5149da47bc648fa42b2da67.scope - libcontainer container dc32d56cc111a6d3433e4d1ef7caf6635fcceb00f5149da47bc648fa42b2da67. Jul 7 06:05:11.513208 containerd[1544]: time="2025-07-07T06:05:11.513135483Z" level=info msg="StartContainer for \"dc32d56cc111a6d3433e4d1ef7caf6635fcceb00f5149da47bc648fa42b2da67\" returns successfully" Jul 7 06:05:11.995425 containerd[1544]: time="2025-07-07T06:05:11.995348839Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 06:05:11.998095 systemd[1]: cri-containerd-dc32d56cc111a6d3433e4d1ef7caf6635fcceb00f5149da47bc648fa42b2da67.scope: Deactivated successfully. Jul 7 06:05:11.998579 systemd[1]: cri-containerd-dc32d56cc111a6d3433e4d1ef7caf6635fcceb00f5149da47bc648fa42b2da67.scope: Consumed 508ms CPU time, 198.1M memory peak, 171.2M written to disk. 
Jul 7 06:05:12.000175 containerd[1544]: time="2025-07-07T06:05:12.000146288Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc32d56cc111a6d3433e4d1ef7caf6635fcceb00f5149da47bc648fa42b2da67\" id:\"dc32d56cc111a6d3433e4d1ef7caf6635fcceb00f5149da47bc648fa42b2da67\" pid:3478 exited_at:{seconds:1751868311 nanos:999559332}" Jul 7 06:05:12.000221 containerd[1544]: time="2025-07-07T06:05:12.000201588Z" level=info msg="received exit event container_id:\"dc32d56cc111a6d3433e4d1ef7caf6635fcceb00f5149da47bc648fa42b2da67\" id:\"dc32d56cc111a6d3433e4d1ef7caf6635fcceb00f5149da47bc648fa42b2da67\" pid:3478 exited_at:{seconds:1751868311 nanos:999559332}" Jul 7 06:05:12.012564 kubelet[2727]: I0707 06:05:12.012070 2727 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 7 06:05:12.030282 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc32d56cc111a6d3433e4d1ef7caf6635fcceb00f5149da47bc648fa42b2da67-rootfs.mount: Deactivated successfully. Jul 7 06:05:12.079083 systemd[1]: Created slice kubepods-burstable-podad6c7672_9663_4bb5_81e2_682e5b9ee692.slice - libcontainer container kubepods-burstable-podad6c7672_9663_4bb5_81e2_682e5b9ee692.slice. Jul 7 06:05:12.096008 systemd[1]: Created slice kubepods-besteffort-podfcbcd94f_005f_4449_b484_da9c6d7cab63.slice - libcontainer container kubepods-besteffort-podfcbcd94f_005f_4449_b484_da9c6d7cab63.slice. Jul 7 06:05:12.125430 systemd[1]: Created slice kubepods-burstable-pod95a5639a_5c75_4697_b28e_c61f7dab6169.slice - libcontainer container kubepods-burstable-pod95a5639a_5c75_4697_b28e_c61f7dab6169.slice. Jul 7 06:05:12.145477 systemd[1]: Created slice kubepods-besteffort-pod42e67864_1946_4b93_ab86_d2296d74f6ca.slice - libcontainer container kubepods-besteffort-pod42e67864_1946_4b93_ab86_d2296d74f6ca.slice. 
Jul 7 06:05:12.156016 systemd[1]: Created slice kubepods-besteffort-pod660f5dd9_b647_4594_92a6_e946605f47e3.slice - libcontainer container kubepods-besteffort-pod660f5dd9_b647_4594_92a6_e946605f47e3.slice. Jul 7 06:05:12.160043 kubelet[2727]: I0707 06:05:12.160015 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk47q\" (UniqueName: \"kubernetes.io/projected/95a5639a-5c75-4697-b28e-c61f7dab6169-kube-api-access-mk47q\") pod \"coredns-674b8bbfcf-vzz9g\" (UID: \"95a5639a-5c75-4697-b28e-c61f7dab6169\") " pod="kube-system/coredns-674b8bbfcf-vzz9g" Jul 7 06:05:12.160043 kubelet[2727]: I0707 06:05:12.160048 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad6c7672-9663-4bb5-81e2-682e5b9ee692-config-volume\") pod \"coredns-674b8bbfcf-tzg8k\" (UID: \"ad6c7672-9663-4bb5-81e2-682e5b9ee692\") " pod="kube-system/coredns-674b8bbfcf-tzg8k" Jul 7 06:05:12.161563 kubelet[2727]: I0707 06:05:12.160064 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xhcw\" (UniqueName: \"kubernetes.io/projected/ad6c7672-9663-4bb5-81e2-682e5b9ee692-kube-api-access-4xhcw\") pod \"coredns-674b8bbfcf-tzg8k\" (UID: \"ad6c7672-9663-4bb5-81e2-682e5b9ee692\") " pod="kube-system/coredns-674b8bbfcf-tzg8k" Jul 7 06:05:12.161563 kubelet[2727]: I0707 06:05:12.160079 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95a5639a-5c75-4697-b28e-c61f7dab6169-config-volume\") pod \"coredns-674b8bbfcf-vzz9g\" (UID: \"95a5639a-5c75-4697-b28e-c61f7dab6169\") " pod="kube-system/coredns-674b8bbfcf-vzz9g" Jul 7 06:05:12.166623 systemd[1]: Created slice kubepods-besteffort-pod5584af6e_842e_4fc9_84d1_f6b0c3bfe672.slice - libcontainer container 
kubepods-besteffort-pod5584af6e_842e_4fc9_84d1_f6b0c3bfe672.slice. Jul 7 06:05:12.176369 systemd[1]: Created slice kubepods-besteffort-pode38fc636_275d_4334_8ad6_800c7cc4fa05.slice - libcontainer container kubepods-besteffort-pode38fc636_275d_4334_8ad6_800c7cc4fa05.slice. Jul 7 06:05:12.223913 containerd[1544]: time="2025-07-07T06:05:12.223787029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 7 06:05:12.261178 kubelet[2727]: I0707 06:05:12.260845 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/660f5dd9-b647-4594-92a6-e946605f47e3-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-9jrmp\" (UID: \"660f5dd9-b647-4594-92a6-e946605f47e3\") " pod="calico-system/goldmane-768f4c5c69-9jrmp" Jul 7 06:05:12.261178 kubelet[2727]: I0707 06:05:12.260880 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcbcd94f-005f-4449-b484-da9c6d7cab63-whisker-ca-bundle\") pod \"whisker-57df47b4b-zcfmd\" (UID: \"fcbcd94f-005f-4449-b484-da9c6d7cab63\") " pod="calico-system/whisker-57df47b4b-zcfmd" Jul 7 06:05:12.261178 kubelet[2727]: I0707 06:05:12.260898 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/42e67864-1946-4b93-ab86-d2296d74f6ca-calico-apiserver-certs\") pod \"calico-apiserver-789d8b94bc-596hn\" (UID: \"42e67864-1946-4b93-ab86-d2296d74f6ca\") " pod="calico-apiserver/calico-apiserver-789d8b94bc-596hn" Jul 7 06:05:12.261178 kubelet[2727]: I0707 06:05:12.260917 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhbwt\" (UniqueName: \"kubernetes.io/projected/5584af6e-842e-4fc9-84d1-f6b0c3bfe672-kube-api-access-dhbwt\") pod 
\"calico-kube-controllers-75c5d6d7d5-mwflw\" (UID: \"5584af6e-842e-4fc9-84d1-f6b0c3bfe672\") " pod="calico-system/calico-kube-controllers-75c5d6d7d5-mwflw" Jul 7 06:05:12.261178 kubelet[2727]: I0707 06:05:12.260934 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g4zp\" (UniqueName: \"kubernetes.io/projected/660f5dd9-b647-4594-92a6-e946605f47e3-kube-api-access-8g4zp\") pod \"goldmane-768f4c5c69-9jrmp\" (UID: \"660f5dd9-b647-4594-92a6-e946605f47e3\") " pod="calico-system/goldmane-768f4c5c69-9jrmp" Jul 7 06:05:12.261361 kubelet[2727]: I0707 06:05:12.260953 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5584af6e-842e-4fc9-84d1-f6b0c3bfe672-tigera-ca-bundle\") pod \"calico-kube-controllers-75c5d6d7d5-mwflw\" (UID: \"5584af6e-842e-4fc9-84d1-f6b0c3bfe672\") " pod="calico-system/calico-kube-controllers-75c5d6d7d5-mwflw" Jul 7 06:05:12.261361 kubelet[2727]: I0707 06:05:12.260969 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fcbcd94f-005f-4449-b484-da9c6d7cab63-whisker-backend-key-pair\") pod \"whisker-57df47b4b-zcfmd\" (UID: \"fcbcd94f-005f-4449-b484-da9c6d7cab63\") " pod="calico-system/whisker-57df47b4b-zcfmd" Jul 7 06:05:12.261361 kubelet[2727]: I0707 06:05:12.260986 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42t9j\" (UniqueName: \"kubernetes.io/projected/e38fc636-275d-4334-8ad6-800c7cc4fa05-kube-api-access-42t9j\") pod \"calico-apiserver-789d8b94bc-85w6d\" (UID: \"e38fc636-275d-4334-8ad6-800c7cc4fa05\") " pod="calico-apiserver/calico-apiserver-789d8b94bc-85w6d" Jul 7 06:05:12.261361 kubelet[2727]: I0707 06:05:12.261010 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/660f5dd9-b647-4594-92a6-e946605f47e3-goldmane-key-pair\") pod \"goldmane-768f4c5c69-9jrmp\" (UID: \"660f5dd9-b647-4594-92a6-e946605f47e3\") " pod="calico-system/goldmane-768f4c5c69-9jrmp" Jul 7 06:05:12.261361 kubelet[2727]: I0707 06:05:12.261035 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb4v7\" (UniqueName: \"kubernetes.io/projected/fcbcd94f-005f-4449-b484-da9c6d7cab63-kube-api-access-kb4v7\") pod \"whisker-57df47b4b-zcfmd\" (UID: \"fcbcd94f-005f-4449-b484-da9c6d7cab63\") " pod="calico-system/whisker-57df47b4b-zcfmd" Jul 7 06:05:12.261462 kubelet[2727]: I0707 06:05:12.261052 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e38fc636-275d-4334-8ad6-800c7cc4fa05-calico-apiserver-certs\") pod \"calico-apiserver-789d8b94bc-85w6d\" (UID: \"e38fc636-275d-4334-8ad6-800c7cc4fa05\") " pod="calico-apiserver/calico-apiserver-789d8b94bc-85w6d" Jul 7 06:05:12.261462 kubelet[2727]: I0707 06:05:12.261067 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/660f5dd9-b647-4594-92a6-e946605f47e3-config\") pod \"goldmane-768f4c5c69-9jrmp\" (UID: \"660f5dd9-b647-4594-92a6-e946605f47e3\") " pod="calico-system/goldmane-768f4c5c69-9jrmp" Jul 7 06:05:12.261462 kubelet[2727]: I0707 06:05:12.261105 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfsc9\" (UniqueName: \"kubernetes.io/projected/42e67864-1946-4b93-ab86-d2296d74f6ca-kube-api-access-qfsc9\") pod \"calico-apiserver-789d8b94bc-596hn\" (UID: \"42e67864-1946-4b93-ab86-d2296d74f6ca\") " pod="calico-apiserver/calico-apiserver-789d8b94bc-596hn" Jul 7 06:05:12.394847 kubelet[2727]: E0707 06:05:12.392916 2727 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:05:12.395076 containerd[1544]: time="2025-07-07T06:05:12.394577160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tzg8k,Uid:ad6c7672-9663-4bb5-81e2-682e5b9ee692,Namespace:kube-system,Attempt:0,}" Jul 7 06:05:12.407079 containerd[1544]: time="2025-07-07T06:05:12.407029957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57df47b4b-zcfmd,Uid:fcbcd94f-005f-4449-b484-da9c6d7cab63,Namespace:calico-system,Attempt:0,}" Jul 7 06:05:12.437052 kubelet[2727]: E0707 06:05:12.436989 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:05:12.438993 containerd[1544]: time="2025-07-07T06:05:12.438956750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vzz9g,Uid:95a5639a-5c75-4697-b28e-c61f7dab6169,Namespace:kube-system,Attempt:0,}" Jul 7 06:05:12.456487 containerd[1544]: time="2025-07-07T06:05:12.456459148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-789d8b94bc-596hn,Uid:42e67864-1946-4b93-ab86-d2296d74f6ca,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:05:12.467304 containerd[1544]: time="2025-07-07T06:05:12.467004206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-9jrmp,Uid:660f5dd9-b647-4594-92a6-e946605f47e3,Namespace:calico-system,Attempt:0,}" Jul 7 06:05:12.476121 containerd[1544]: time="2025-07-07T06:05:12.476101712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75c5d6d7d5-mwflw,Uid:5584af6e-842e-4fc9-84d1-f6b0c3bfe672,Namespace:calico-system,Attempt:0,}" Jul 7 06:05:12.488546 containerd[1544]: time="2025-07-07T06:05:12.488426840Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-789d8b94bc-85w6d,Uid:e38fc636-275d-4334-8ad6-800c7cc4fa05,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:05:12.634934 containerd[1544]: time="2025-07-07T06:05:12.634686994Z" level=error msg="Failed to destroy network for sandbox \"ee79aeff5bca5737118736324e780402924b7c661aec7fcc0700cc44fe810119\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:05:12.637421 containerd[1544]: time="2025-07-07T06:05:12.637350889Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tzg8k,Uid:ad6c7672-9663-4bb5-81e2-682e5b9ee692,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee79aeff5bca5737118736324e780402924b7c661aec7fcc0700cc44fe810119\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:05:12.639617 kubelet[2727]: E0707 06:05:12.637976 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee79aeff5bca5737118736324e780402924b7c661aec7fcc0700cc44fe810119\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:05:12.639617 kubelet[2727]: E0707 06:05:12.638080 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee79aeff5bca5737118736324e780402924b7c661aec7fcc0700cc44fe810119\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-tzg8k" Jul 7 06:05:12.639617 kubelet[2727]: E0707 06:05:12.638104 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee79aeff5bca5737118736324e780402924b7c661aec7fcc0700cc44fe810119\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-tzg8k" Jul 7 06:05:12.639851 kubelet[2727]: E0707 06:05:12.638178 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-tzg8k_kube-system(ad6c7672-9663-4bb5-81e2-682e5b9ee692)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-tzg8k_kube-system(ad6c7672-9663-4bb5-81e2-682e5b9ee692)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee79aeff5bca5737118736324e780402924b7c661aec7fcc0700cc44fe810119\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-tzg8k" podUID="ad6c7672-9663-4bb5-81e2-682e5b9ee692" Jul 7 06:05:12.639909 containerd[1544]: time="2025-07-07T06:05:12.639865084Z" level=error msg="Failed to destroy network for sandbox \"724acac8bc60b586f4d3d5bc43b1da3a763ea4015d1d79fa5e9608d6f7d57f2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:05:12.642216 containerd[1544]: time="2025-07-07T06:05:12.642180670Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57df47b4b-zcfmd,Uid:fcbcd94f-005f-4449-b484-da9c6d7cab63,Namespace:calico-system,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"724acac8bc60b586f4d3d5bc43b1da3a763ea4015d1d79fa5e9608d6f7d57f2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:05:12.644531 kubelet[2727]: E0707 06:05:12.642775 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"724acac8bc60b586f4d3d5bc43b1da3a763ea4015d1d79fa5e9608d6f7d57f2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:05:12.644531 kubelet[2727]: E0707 06:05:12.642885 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"724acac8bc60b586f4d3d5bc43b1da3a763ea4015d1d79fa5e9608d6f7d57f2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57df47b4b-zcfmd" Jul 7 06:05:12.644531 kubelet[2727]: E0707 06:05:12.642990 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"724acac8bc60b586f4d3d5bc43b1da3a763ea4015d1d79fa5e9608d6f7d57f2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57df47b4b-zcfmd" Jul 7 06:05:12.644617 kubelet[2727]: E0707 06:05:12.643525 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-57df47b4b-zcfmd_calico-system(fcbcd94f-005f-4449-b484-da9c6d7cab63)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-57df47b4b-zcfmd_calico-system(fcbcd94f-005f-4449-b484-da9c6d7cab63)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"724acac8bc60b586f4d3d5bc43b1da3a763ea4015d1d79fa5e9608d6f7d57f2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57df47b4b-zcfmd" podUID="fcbcd94f-005f-4449-b484-da9c6d7cab63" Jul 7 06:05:12.672342 containerd[1544]: time="2025-07-07T06:05:12.672308384Z" level=error msg="Failed to destroy network for sandbox \"7f15641aa373cc1bc67268e30cc1c50dff1e8f185f4aa8dbc5962428ff55f08d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:05:12.679561 containerd[1544]: time="2025-07-07T06:05:12.678938706Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-789d8b94bc-596hn,Uid:42e67864-1946-4b93-ab86-d2296d74f6ca,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f15641aa373cc1bc67268e30cc1c50dff1e8f185f4aa8dbc5962428ff55f08d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:05:12.679886 kubelet[2727]: E0707 06:05:12.679854 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f15641aa373cc1bc67268e30cc1c50dff1e8f185f4aa8dbc5962428ff55f08d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 
06:05:12.680961 kubelet[2727]: E0707 06:05:12.680622 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f15641aa373cc1bc67268e30cc1c50dff1e8f185f4aa8dbc5962428ff55f08d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-789d8b94bc-596hn" Jul 7 06:05:12.680961 kubelet[2727]: E0707 06:05:12.680651 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f15641aa373cc1bc67268e30cc1c50dff1e8f185f4aa8dbc5962428ff55f08d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-789d8b94bc-596hn" Jul 7 06:05:12.680961 kubelet[2727]: E0707 06:05:12.680722 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-789d8b94bc-596hn_calico-apiserver(42e67864-1946-4b93-ab86-d2296d74f6ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-789d8b94bc-596hn_calico-apiserver(42e67864-1946-4b93-ab86-d2296d74f6ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f15641aa373cc1bc67268e30cc1c50dff1e8f185f4aa8dbc5962428ff55f08d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-789d8b94bc-596hn" podUID="42e67864-1946-4b93-ab86-d2296d74f6ca" Jul 7 06:05:12.691338 containerd[1544]: time="2025-07-07T06:05:12.691307363Z" level=error msg="Failed to destroy network for sandbox 
\"f885f3a0be975812b34efbd7e780eda051c6139e50f3eebb65829c787141daf0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:05:12.693504 containerd[1544]: time="2025-07-07T06:05:12.693477300Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vzz9g,Uid:95a5639a-5c75-4697-b28e-c61f7dab6169,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f885f3a0be975812b34efbd7e780eda051c6139e50f3eebb65829c787141daf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:05:12.694677 kubelet[2727]: E0707 06:05:12.694653 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f885f3a0be975812b34efbd7e780eda051c6139e50f3eebb65829c787141daf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:05:12.695011 kubelet[2727]: E0707 06:05:12.694808 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f885f3a0be975812b34efbd7e780eda051c6139e50f3eebb65829c787141daf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vzz9g" Jul 7 06:05:12.696555 kubelet[2727]: E0707 06:05:12.695082 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f885f3a0be975812b34efbd7e780eda051c6139e50f3eebb65829c787141daf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vzz9g" Jul 7 06:05:12.696555 kubelet[2727]: E0707 06:05:12.695131 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-vzz9g_kube-system(95a5639a-5c75-4697-b28e-c61f7dab6169)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-vzz9g_kube-system(95a5639a-5c75-4697-b28e-c61f7dab6169)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f885f3a0be975812b34efbd7e780eda051c6139e50f3eebb65829c787141daf0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-vzz9g" podUID="95a5639a-5c75-4697-b28e-c61f7dab6169" Jul 7 06:05:12.706079 containerd[1544]: time="2025-07-07T06:05:12.706044167Z" level=error msg="Failed to destroy network for sandbox \"51b74c358998a53e94cff8264bf612d6901eeae6ea9496ecc7c2bef2ad68ed2e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:05:12.707476 containerd[1544]: time="2025-07-07T06:05:12.707451378Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-789d8b94bc-85w6d,Uid:e38fc636-275d-4334-8ad6-800c7cc4fa05,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"51b74c358998a53e94cff8264bf612d6901eeae6ea9496ecc7c2bef2ad68ed2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:05:12.707764 kubelet[2727]: E0707 06:05:12.707745 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51b74c358998a53e94cff8264bf612d6901eeae6ea9496ecc7c2bef2ad68ed2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:05:12.707844 kubelet[2727]: E0707 06:05:12.707830 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51b74c358998a53e94cff8264bf612d6901eeae6ea9496ecc7c2bef2ad68ed2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-789d8b94bc-85w6d" Jul 7 06:05:12.707906 kubelet[2727]: E0707 06:05:12.707889 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51b74c358998a53e94cff8264bf612d6901eeae6ea9496ecc7c2bef2ad68ed2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-789d8b94bc-85w6d" Jul 7 06:05:12.707992 kubelet[2727]: E0707 06:05:12.707972 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-789d8b94bc-85w6d_calico-apiserver(e38fc636-275d-4334-8ad6-800c7cc4fa05)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-789d8b94bc-85w6d_calico-apiserver(e38fc636-275d-4334-8ad6-800c7cc4fa05)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"51b74c358998a53e94cff8264bf612d6901eeae6ea9496ecc7c2bef2ad68ed2e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-789d8b94bc-85w6d" podUID="e38fc636-275d-4334-8ad6-800c7cc4fa05" Jul 7 06:05:12.708854 containerd[1544]: time="2025-07-07T06:05:12.708820241Z" level=error msg="Failed to destroy network for sandbox \"4bfbc490dab31940736c48cdde0d6752cfb9bdb4d54a1664e61ddad2bb70dcda\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:05:12.710778 containerd[1544]: time="2025-07-07T06:05:12.710665050Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75c5d6d7d5-mwflw,Uid:5584af6e-842e-4fc9-84d1-f6b0c3bfe672,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bfbc490dab31940736c48cdde0d6752cfb9bdb4d54a1664e61ddad2bb70dcda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:05:12.711032 kubelet[2727]: E0707 06:05:12.710915 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bfbc490dab31940736c48cdde0d6752cfb9bdb4d54a1664e61ddad2bb70dcda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:05:12.711109 kubelet[2727]: E0707 06:05:12.711093 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4bfbc490dab31940736c48cdde0d6752cfb9bdb4d54a1664e61ddad2bb70dcda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75c5d6d7d5-mwflw" Jul 7 06:05:12.711356 kubelet[2727]: E0707 06:05:12.711153 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bfbc490dab31940736c48cdde0d6752cfb9bdb4d54a1664e61ddad2bb70dcda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75c5d6d7d5-mwflw" Jul 7 06:05:12.711356 kubelet[2727]: E0707 06:05:12.711190 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-75c5d6d7d5-mwflw_calico-system(5584af6e-842e-4fc9-84d1-f6b0c3bfe672)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-75c5d6d7d5-mwflw_calico-system(5584af6e-842e-4fc9-84d1-f6b0c3bfe672)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4bfbc490dab31940736c48cdde0d6752cfb9bdb4d54a1664e61ddad2bb70dcda\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75c5d6d7d5-mwflw" podUID="5584af6e-842e-4fc9-84d1-f6b0c3bfe672" Jul 7 06:05:12.715546 containerd[1544]: time="2025-07-07T06:05:12.715501771Z" level=error msg="Failed to destroy network for sandbox \"47d85af8f0c33369e9ed02e84354a4a522b25724b47cef5804e99a67e5a52a8d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:05:12.719606 containerd[1544]: time="2025-07-07T06:05:12.719568767Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-9jrmp,Uid:660f5dd9-b647-4594-92a6-e946605f47e3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"47d85af8f0c33369e9ed02e84354a4a522b25724b47cef5804e99a67e5a52a8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:05:12.719877 kubelet[2727]: E0707 06:05:12.719678 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47d85af8f0c33369e9ed02e84354a4a522b25724b47cef5804e99a67e5a52a8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:05:12.719877 kubelet[2727]: E0707 06:05:12.719766 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47d85af8f0c33369e9ed02e84354a4a522b25724b47cef5804e99a67e5a52a8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-9jrmp" Jul 7 06:05:12.719877 kubelet[2727]: E0707 06:05:12.719785 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47d85af8f0c33369e9ed02e84354a4a522b25724b47cef5804e99a67e5a52a8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-9jrmp" Jul 7 06:05:12.720036 kubelet[2727]: E0707 06:05:12.719829 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-9jrmp_calico-system(660f5dd9-b647-4594-92a6-e946605f47e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-9jrmp_calico-system(660f5dd9-b647-4594-92a6-e946605f47e3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"47d85af8f0c33369e9ed02e84354a4a522b25724b47cef5804e99a67e5a52a8d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-9jrmp" podUID="660f5dd9-b647-4594-92a6-e946605f47e3" Jul 7 06:05:13.121691 systemd[1]: Created slice kubepods-besteffort-pod699001e3_2bb8_49d9_b0d8_60e5a17aecbb.slice - libcontainer container kubepods-besteffort-pod699001e3_2bb8_49d9_b0d8_60e5a17aecbb.slice. 
Jul 7 06:05:13.125767 containerd[1544]: time="2025-07-07T06:05:13.125679252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wxjft,Uid:699001e3-2bb8-49d9-b0d8-60e5a17aecbb,Namespace:calico-system,Attempt:0,}" Jul 7 06:05:13.187697 containerd[1544]: time="2025-07-07T06:05:13.187547051Z" level=error msg="Failed to destroy network for sandbox \"ce80600414c9e52e52f22279812ff56b13e9cebc761ad3ef20514827999f6a41\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:05:13.189143 containerd[1544]: time="2025-07-07T06:05:13.189075063Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wxjft,Uid:699001e3-2bb8-49d9-b0d8-60e5a17aecbb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce80600414c9e52e52f22279812ff56b13e9cebc761ad3ef20514827999f6a41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:05:13.189716 kubelet[2727]: E0707 06:05:13.189654 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce80600414c9e52e52f22279812ff56b13e9cebc761ad3ef20514827999f6a41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:05:13.190037 kubelet[2727]: E0707 06:05:13.189735 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce80600414c9e52e52f22279812ff56b13e9cebc761ad3ef20514827999f6a41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wxjft" Jul 7 06:05:13.190037 kubelet[2727]: E0707 06:05:13.189758 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce80600414c9e52e52f22279812ff56b13e9cebc761ad3ef20514827999f6a41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wxjft" Jul 7 06:05:13.190037 kubelet[2727]: E0707 06:05:13.189808 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wxjft_calico-system(699001e3-2bb8-49d9-b0d8-60e5a17aecbb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wxjft_calico-system(699001e3-2bb8-49d9-b0d8-60e5a17aecbb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce80600414c9e52e52f22279812ff56b13e9cebc761ad3ef20514827999f6a41\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wxjft" podUID="699001e3-2bb8-49d9-b0d8-60e5a17aecbb" Jul 7 06:05:13.419259 systemd[1]: run-netns-cni\x2d9471d4b0\x2d4cb5\x2d8025\x2db4fe\x2d7d3515d5a63b.mount: Deactivated successfully. Jul 7 06:05:13.419399 systemd[1]: run-netns-cni\x2d19eb6565\x2dd5fa\x2d51d3\x2da938\x2dd7f5d43f2e79.mount: Deactivated successfully. Jul 7 06:05:13.419472 systemd[1]: run-netns-cni\x2dc2b85f1a\x2dfac4\x2da176\x2da2cf\x2dd36cfe5cd64e.mount: Deactivated successfully. Jul 7 06:05:13.419537 systemd[1]: run-netns-cni\x2d08ab5747\x2d2ff9\x2db673\x2dde26\x2d7908598a6062.mount: Deactivated successfully. 
Jul 7 06:05:13.419599 systemd[1]: run-netns-cni\x2d7013948b\x2d43f9\x2d5c1d\x2d1f14\x2d4d237e696594.mount: Deactivated successfully. Jul 7 06:05:13.419662 systemd[1]: run-netns-cni\x2dc1541562\x2df0e8\x2d4f7c\x2d07d2\x2d8f5502eee0c1.mount: Deactivated successfully. Jul 7 06:05:16.687948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4003349770.mount: Deactivated successfully. Jul 7 06:05:16.719259 containerd[1544]: time="2025-07-07T06:05:16.719206448Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:16.720052 containerd[1544]: time="2025-07-07T06:05:16.719919185Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 7 06:05:16.720680 containerd[1544]: time="2025-07-07T06:05:16.720648901Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:16.722532 containerd[1544]: time="2025-07-07T06:05:16.722500642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:16.722981 containerd[1544]: time="2025-07-07T06:05:16.722949410Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 4.499130521s" Jul 7 06:05:16.723057 containerd[1544]: time="2025-07-07T06:05:16.723043390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference 
\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 7 06:05:16.747508 containerd[1544]: time="2025-07-07T06:05:16.747467287Z" level=info msg="CreateContainer within sandbox \"00fb5c04824ab08cbef365e96938420a2a8140e0c2ec05983d1ffa3d75ed7b43\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 06:05:16.755910 containerd[1544]: time="2025-07-07T06:05:16.755881108Z" level=info msg="Container f9500fbb35c808a738324b0631ac63e92db772c81e96ef05ea9825535fdf7448: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:05:16.762784 containerd[1544]: time="2025-07-07T06:05:16.762750366Z" level=info msg="CreateContainer within sandbox \"00fb5c04824ab08cbef365e96938420a2a8140e0c2ec05983d1ffa3d75ed7b43\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f9500fbb35c808a738324b0631ac63e92db772c81e96ef05ea9825535fdf7448\"" Jul 7 06:05:16.764474 containerd[1544]: time="2025-07-07T06:05:16.764419138Z" level=info msg="StartContainer for \"f9500fbb35c808a738324b0631ac63e92db772c81e96ef05ea9825535fdf7448\"" Jul 7 06:05:16.765667 containerd[1544]: time="2025-07-07T06:05:16.765631993Z" level=info msg="connecting to shim f9500fbb35c808a738324b0631ac63e92db772c81e96ef05ea9825535fdf7448" address="unix:///run/containerd/s/d16469237b34dfd63504fb868f2836a1b2818f08c43735ef35581b9f5707c374" protocol=ttrpc version=3 Jul 7 06:05:16.821823 systemd[1]: Started cri-containerd-f9500fbb35c808a738324b0631ac63e92db772c81e96ef05ea9825535fdf7448.scope - libcontainer container f9500fbb35c808a738324b0631ac63e92db772c81e96ef05ea9825535fdf7448. Jul 7 06:05:16.872670 containerd[1544]: time="2025-07-07T06:05:16.872646026Z" level=info msg="StartContainer for \"f9500fbb35c808a738324b0631ac63e92db772c81e96ef05ea9825535fdf7448\" returns successfully" Jul 7 06:05:16.949659 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 06:05:16.949865 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Jul 7 06:05:17.200687 kubelet[2727]: I0707 06:05:17.200641 2727 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fcbcd94f-005f-4449-b484-da9c6d7cab63-whisker-backend-key-pair\") pod \"fcbcd94f-005f-4449-b484-da9c6d7cab63\" (UID: \"fcbcd94f-005f-4449-b484-da9c6d7cab63\") " Jul 7 06:05:17.200687 kubelet[2727]: I0707 06:05:17.200733 2727 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcbcd94f-005f-4449-b484-da9c6d7cab63-whisker-ca-bundle\") pod \"fcbcd94f-005f-4449-b484-da9c6d7cab63\" (UID: \"fcbcd94f-005f-4449-b484-da9c6d7cab63\") " Jul 7 06:05:17.200687 kubelet[2727]: I0707 06:05:17.200754 2727 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb4v7\" (UniqueName: \"kubernetes.io/projected/fcbcd94f-005f-4449-b484-da9c6d7cab63-kube-api-access-kb4v7\") pod \"fcbcd94f-005f-4449-b484-da9c6d7cab63\" (UID: \"fcbcd94f-005f-4449-b484-da9c6d7cab63\") " Jul 7 06:05:17.202548 kubelet[2727]: I0707 06:05:17.202504 2727 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcbcd94f-005f-4449-b484-da9c6d7cab63-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "fcbcd94f-005f-4449-b484-da9c6d7cab63" (UID: "fcbcd94f-005f-4449-b484-da9c6d7cab63"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 06:05:17.217448 kubelet[2727]: I0707 06:05:17.217424 2727 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcbcd94f-005f-4449-b484-da9c6d7cab63-kube-api-access-kb4v7" (OuterVolumeSpecName: "kube-api-access-kb4v7") pod "fcbcd94f-005f-4449-b484-da9c6d7cab63" (UID: "fcbcd94f-005f-4449-b484-da9c6d7cab63"). InnerVolumeSpecName "kube-api-access-kb4v7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 06:05:17.218018 kubelet[2727]: I0707 06:05:17.218002 2727 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcbcd94f-005f-4449-b484-da9c6d7cab63-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "fcbcd94f-005f-4449-b484-da9c6d7cab63" (UID: "fcbcd94f-005f-4449-b484-da9c6d7cab63"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 06:05:17.249691 systemd[1]: Removed slice kubepods-besteffort-podfcbcd94f_005f_4449_b484_da9c6d7cab63.slice - libcontainer container kubepods-besteffort-podfcbcd94f_005f_4449_b484_da9c6d7cab63.slice. Jul 7 06:05:17.260407 kubelet[2727]: I0707 06:05:17.260346 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vnlm8" podStartSLOduration=1.395898817 podStartE2EDuration="11.260331966s" podCreationTimestamp="2025-07-07 06:05:06 +0000 UTC" firstStartedPulling="2025-07-07 06:05:06.859454127 +0000 UTC m=+18.856579283" lastFinishedPulling="2025-07-07 06:05:16.723887276 +0000 UTC m=+28.721012432" observedRunningTime="2025-07-07 06:05:17.259260272 +0000 UTC m=+29.256385448" watchObservedRunningTime="2025-07-07 06:05:17.260331966 +0000 UTC m=+29.257457132" Jul 7 06:05:17.301880 kubelet[2727]: I0707 06:05:17.301845 2727 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fcbcd94f-005f-4449-b484-da9c6d7cab63-whisker-backend-key-pair\") on node \"172-236-119-245\" DevicePath \"\"" Jul 7 06:05:17.301880 kubelet[2727]: I0707 06:05:17.301870 2727 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcbcd94f-005f-4449-b484-da9c6d7cab63-whisker-ca-bundle\") on node \"172-236-119-245\" DevicePath \"\"" Jul 7 06:05:17.301880 kubelet[2727]: I0707 06:05:17.301881 2727 reconciler_common.go:299] 
"Volume detached for volume \"kube-api-access-kb4v7\" (UniqueName: \"kubernetes.io/projected/fcbcd94f-005f-4449-b484-da9c6d7cab63-kube-api-access-kb4v7\") on node \"172-236-119-245\" DevicePath \"\"" Jul 7 06:05:17.310731 systemd[1]: Created slice kubepods-besteffort-podbd11f7a6_6edb_4a40_81fe_86985726f066.slice - libcontainer container kubepods-besteffort-podbd11f7a6_6edb_4a40_81fe_86985726f066.slice. Jul 7 06:05:17.503632 kubelet[2727]: I0707 06:05:17.503436 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bd11f7a6-6edb-4a40-81fe-86985726f066-whisker-backend-key-pair\") pod \"whisker-56cd7c5598-h5bkb\" (UID: \"bd11f7a6-6edb-4a40-81fe-86985726f066\") " pod="calico-system/whisker-56cd7c5598-h5bkb" Jul 7 06:05:17.503632 kubelet[2727]: I0707 06:05:17.503477 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd11f7a6-6edb-4a40-81fe-86985726f066-whisker-ca-bundle\") pod \"whisker-56cd7c5598-h5bkb\" (UID: \"bd11f7a6-6edb-4a40-81fe-86985726f066\") " pod="calico-system/whisker-56cd7c5598-h5bkb" Jul 7 06:05:17.503632 kubelet[2727]: I0707 06:05:17.503506 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v27j5\" (UniqueName: \"kubernetes.io/projected/bd11f7a6-6edb-4a40-81fe-86985726f066-kube-api-access-v27j5\") pod \"whisker-56cd7c5598-h5bkb\" (UID: \"bd11f7a6-6edb-4a40-81fe-86985726f066\") " pod="calico-system/whisker-56cd7c5598-h5bkb" Jul 7 06:05:17.616112 containerd[1544]: time="2025-07-07T06:05:17.616058450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56cd7c5598-h5bkb,Uid:bd11f7a6-6edb-4a40-81fe-86985726f066,Namespace:calico-system,Attempt:0,}" Jul 7 06:05:17.694731 systemd[1]: 
var-lib-kubelet-pods-fcbcd94f\x2d005f\x2d4449\x2db484\x2dda9c6d7cab63-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkb4v7.mount: Deactivated successfully. Jul 7 06:05:17.695285 systemd[1]: var-lib-kubelet-pods-fcbcd94f\x2d005f\x2d4449\x2db484\x2dda9c6d7cab63-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 7 06:05:17.762431 systemd-networkd[1456]: caliaa5cfe72529: Link UP Jul 7 06:05:17.764101 systemd-networkd[1456]: caliaa5cfe72529: Gained carrier Jul 7 06:05:17.783916 containerd[1544]: 2025-07-07 06:05:17.641 [INFO][3802] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:05:17.783916 containerd[1544]: 2025-07-07 06:05:17.680 [INFO][3802] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--119--245-k8s-whisker--56cd7c5598--h5bkb-eth0 whisker-56cd7c5598- calico-system bd11f7a6-6edb-4a40-81fe-86985726f066 887 0 2025-07-07 06:05:17 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:56cd7c5598 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-236-119-245 whisker-56cd7c5598-h5bkb eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliaa5cfe72529 [] [] }} ContainerID="33da6a91d025c4357f264a7fa1af6e7bc93642453641497fcad74d6abb35c361" Namespace="calico-system" Pod="whisker-56cd7c5598-h5bkb" WorkloadEndpoint="172--236--119--245-k8s-whisker--56cd7c5598--h5bkb-" Jul 7 06:05:17.783916 containerd[1544]: 2025-07-07 06:05:17.680 [INFO][3802] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="33da6a91d025c4357f264a7fa1af6e7bc93642453641497fcad74d6abb35c361" Namespace="calico-system" Pod="whisker-56cd7c5598-h5bkb" WorkloadEndpoint="172--236--119--245-k8s-whisker--56cd7c5598--h5bkb-eth0" Jul 7 06:05:17.783916 containerd[1544]: 2025-07-07 06:05:17.713 [INFO][3814] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="33da6a91d025c4357f264a7fa1af6e7bc93642453641497fcad74d6abb35c361" HandleID="k8s-pod-network.33da6a91d025c4357f264a7fa1af6e7bc93642453641497fcad74d6abb35c361" Workload="172--236--119--245-k8s-whisker--56cd7c5598--h5bkb-eth0" Jul 7 06:05:17.783916 containerd[1544]: 2025-07-07 06:05:17.713 [INFO][3814] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="33da6a91d025c4357f264a7fa1af6e7bc93642453641497fcad74d6abb35c361" HandleID="k8s-pod-network.33da6a91d025c4357f264a7fa1af6e7bc93642453641497fcad74d6abb35c361" Workload="172--236--119--245-k8s-whisker--56cd7c5598--h5bkb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024eff0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-119-245", "pod":"whisker-56cd7c5598-h5bkb", "timestamp":"2025-07-07 06:05:17.713668603 +0000 UTC"}, Hostname:"172-236-119-245", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:05:17.783916 containerd[1544]: 2025-07-07 06:05:17.714 [INFO][3814] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:05:17.783916 containerd[1544]: 2025-07-07 06:05:17.714 [INFO][3814] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:05:17.783916 containerd[1544]: 2025-07-07 06:05:17.714 [INFO][3814] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-119-245' Jul 7 06:05:17.783916 containerd[1544]: 2025-07-07 06:05:17.721 [INFO][3814] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.33da6a91d025c4357f264a7fa1af6e7bc93642453641497fcad74d6abb35c361" host="172-236-119-245" Jul 7 06:05:17.783916 containerd[1544]: 2025-07-07 06:05:17.730 [INFO][3814] ipam/ipam.go 394: Looking up existing affinities for host host="172-236-119-245" Jul 7 06:05:17.783916 containerd[1544]: 2025-07-07 06:05:17.734 [INFO][3814] ipam/ipam.go 511: Trying affinity for 192.168.2.128/26 host="172-236-119-245" Jul 7 06:05:17.783916 containerd[1544]: 2025-07-07 06:05:17.735 [INFO][3814] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.128/26 host="172-236-119-245" Jul 7 06:05:17.783916 containerd[1544]: 2025-07-07 06:05:17.737 [INFO][3814] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.128/26 host="172-236-119-245" Jul 7 06:05:17.783916 containerd[1544]: 2025-07-07 06:05:17.737 [INFO][3814] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.128/26 handle="k8s-pod-network.33da6a91d025c4357f264a7fa1af6e7bc93642453641497fcad74d6abb35c361" host="172-236-119-245" Jul 7 06:05:17.783916 containerd[1544]: 2025-07-07 06:05:17.738 [INFO][3814] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.33da6a91d025c4357f264a7fa1af6e7bc93642453641497fcad74d6abb35c361 Jul 7 06:05:17.783916 containerd[1544]: 2025-07-07 06:05:17.742 [INFO][3814] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.128/26 handle="k8s-pod-network.33da6a91d025c4357f264a7fa1af6e7bc93642453641497fcad74d6abb35c361" host="172-236-119-245" Jul 7 06:05:17.783916 containerd[1544]: 2025-07-07 06:05:17.747 [INFO][3814] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.2.129/26] block=192.168.2.128/26 
handle="k8s-pod-network.33da6a91d025c4357f264a7fa1af6e7bc93642453641497fcad74d6abb35c361" host="172-236-119-245" Jul 7 06:05:17.783916 containerd[1544]: 2025-07-07 06:05:17.747 [INFO][3814] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.129/26] handle="k8s-pod-network.33da6a91d025c4357f264a7fa1af6e7bc93642453641497fcad74d6abb35c361" host="172-236-119-245" Jul 7 06:05:17.783916 containerd[1544]: 2025-07-07 06:05:17.747 [INFO][3814] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:05:17.783916 containerd[1544]: 2025-07-07 06:05:17.747 [INFO][3814] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.129/26] IPv6=[] ContainerID="33da6a91d025c4357f264a7fa1af6e7bc93642453641497fcad74d6abb35c361" HandleID="k8s-pod-network.33da6a91d025c4357f264a7fa1af6e7bc93642453641497fcad74d6abb35c361" Workload="172--236--119--245-k8s-whisker--56cd7c5598--h5bkb-eth0" Jul 7 06:05:17.785822 containerd[1544]: 2025-07-07 06:05:17.751 [INFO][3802] cni-plugin/k8s.go 418: Populated endpoint ContainerID="33da6a91d025c4357f264a7fa1af6e7bc93642453641497fcad74d6abb35c361" Namespace="calico-system" Pod="whisker-56cd7c5598-h5bkb" WorkloadEndpoint="172--236--119--245-k8s-whisker--56cd7c5598--h5bkb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--119--245-k8s-whisker--56cd7c5598--h5bkb-eth0", GenerateName:"whisker-56cd7c5598-", Namespace:"calico-system", SelfLink:"", UID:"bd11f7a6-6edb-4a40-81fe-86985726f066", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"56cd7c5598", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-119-245", ContainerID:"", Pod:"whisker-56cd7c5598-h5bkb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.2.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliaa5cfe72529", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:17.785822 containerd[1544]: 2025-07-07 06:05:17.751 [INFO][3802] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.129/32] ContainerID="33da6a91d025c4357f264a7fa1af6e7bc93642453641497fcad74d6abb35c361" Namespace="calico-system" Pod="whisker-56cd7c5598-h5bkb" WorkloadEndpoint="172--236--119--245-k8s-whisker--56cd7c5598--h5bkb-eth0" Jul 7 06:05:17.785822 containerd[1544]: 2025-07-07 06:05:17.752 [INFO][3802] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa5cfe72529 ContainerID="33da6a91d025c4357f264a7fa1af6e7bc93642453641497fcad74d6abb35c361" Namespace="calico-system" Pod="whisker-56cd7c5598-h5bkb" WorkloadEndpoint="172--236--119--245-k8s-whisker--56cd7c5598--h5bkb-eth0" Jul 7 06:05:17.785822 containerd[1544]: 2025-07-07 06:05:17.765 [INFO][3802] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="33da6a91d025c4357f264a7fa1af6e7bc93642453641497fcad74d6abb35c361" Namespace="calico-system" Pod="whisker-56cd7c5598-h5bkb" WorkloadEndpoint="172--236--119--245-k8s-whisker--56cd7c5598--h5bkb-eth0" Jul 7 06:05:17.785822 containerd[1544]: 2025-07-07 06:05:17.766 [INFO][3802] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="33da6a91d025c4357f264a7fa1af6e7bc93642453641497fcad74d6abb35c361" Namespace="calico-system" 
Pod="whisker-56cd7c5598-h5bkb" WorkloadEndpoint="172--236--119--245-k8s-whisker--56cd7c5598--h5bkb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--119--245-k8s-whisker--56cd7c5598--h5bkb-eth0", GenerateName:"whisker-56cd7c5598-", Namespace:"calico-system", SelfLink:"", UID:"bd11f7a6-6edb-4a40-81fe-86985726f066", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"56cd7c5598", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-119-245", ContainerID:"33da6a91d025c4357f264a7fa1af6e7bc93642453641497fcad74d6abb35c361", Pod:"whisker-56cd7c5598-h5bkb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.2.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliaa5cfe72529", MAC:"be:48:81:60:fe:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:17.785822 containerd[1544]: 2025-07-07 06:05:17.777 [INFO][3802] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="33da6a91d025c4357f264a7fa1af6e7bc93642453641497fcad74d6abb35c361" Namespace="calico-system" Pod="whisker-56cd7c5598-h5bkb" WorkloadEndpoint="172--236--119--245-k8s-whisker--56cd7c5598--h5bkb-eth0" Jul 7 06:05:17.822208 containerd[1544]: 
time="2025-07-07T06:05:17.822158337Z" level=info msg="connecting to shim 33da6a91d025c4357f264a7fa1af6e7bc93642453641497fcad74d6abb35c361" address="unix:///run/containerd/s/b73d9d53dfa662162c05aa121971fcd460358fcab2297909e94a870ad0d79955" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:05:17.854824 systemd[1]: Started cri-containerd-33da6a91d025c4357f264a7fa1af6e7bc93642453641497fcad74d6abb35c361.scope - libcontainer container 33da6a91d025c4357f264a7fa1af6e7bc93642453641497fcad74d6abb35c361. Jul 7 06:05:17.907642 containerd[1544]: time="2025-07-07T06:05:17.907595904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56cd7c5598-h5bkb,Uid:bd11f7a6-6edb-4a40-81fe-86985726f066,Namespace:calico-system,Attempt:0,} returns sandbox id \"33da6a91d025c4357f264a7fa1af6e7bc93642453641497fcad74d6abb35c361\"" Jul 7 06:05:17.910200 containerd[1544]: time="2025-07-07T06:05:17.909653925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 06:05:18.116542 kubelet[2727]: I0707 06:05:18.116408 2727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcbcd94f-005f-4449-b484-da9c6d7cab63" path="/var/lib/kubelet/pods/fcbcd94f-005f-4449-b484-da9c6d7cab63/volumes" Jul 7 06:05:18.244494 kubelet[2727]: I0707 06:05:18.244453 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:05:19.437238 containerd[1544]: time="2025-07-07T06:05:19.437184866Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:19.440653 containerd[1544]: time="2025-07-07T06:05:19.439450037Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 7 06:05:19.440653 containerd[1544]: time="2025-07-07T06:05:19.439529377Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:19.441310 containerd[1544]: time="2025-07-07T06:05:19.441277620Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:19.442049 containerd[1544]: time="2025-07-07T06:05:19.442026327Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.532343752s" Jul 7 06:05:19.442119 containerd[1544]: time="2025-07-07T06:05:19.442106586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 7 06:05:19.458786 containerd[1544]: time="2025-07-07T06:05:19.458761912Z" level=info msg="CreateContainer within sandbox \"33da6a91d025c4357f264a7fa1af6e7bc93642453641497fcad74d6abb35c361\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 06:05:19.466870 containerd[1544]: time="2025-07-07T06:05:19.466847570Z" level=info msg="Container cc3b772428a248f1ffd4d91de268ebc0447231e767f541493b6f85b85b9a67cb: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:05:19.485563 containerd[1544]: time="2025-07-07T06:05:19.485392798Z" level=info msg="CreateContainer within sandbox \"33da6a91d025c4357f264a7fa1af6e7bc93642453641497fcad74d6abb35c361\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"cc3b772428a248f1ffd4d91de268ebc0447231e767f541493b6f85b85b9a67cb\"" Jul 7 06:05:19.486893 containerd[1544]: time="2025-07-07T06:05:19.486858212Z" level=info msg="StartContainer for 
\"cc3b772428a248f1ffd4d91de268ebc0447231e767f541493b6f85b85b9a67cb\"" Jul 7 06:05:19.488470 containerd[1544]: time="2025-07-07T06:05:19.488437336Z" level=info msg="connecting to shim cc3b772428a248f1ffd4d91de268ebc0447231e767f541493b6f85b85b9a67cb" address="unix:///run/containerd/s/b73d9d53dfa662162c05aa121971fcd460358fcab2297909e94a870ad0d79955" protocol=ttrpc version=3 Jul 7 06:05:19.515830 systemd[1]: Started cri-containerd-cc3b772428a248f1ffd4d91de268ebc0447231e767f541493b6f85b85b9a67cb.scope - libcontainer container cc3b772428a248f1ffd4d91de268ebc0447231e767f541493b6f85b85b9a67cb. Jul 7 06:05:19.612808 containerd[1544]: time="2025-07-07T06:05:19.612743220Z" level=info msg="StartContainer for \"cc3b772428a248f1ffd4d91de268ebc0447231e767f541493b6f85b85b9a67cb\" returns successfully" Jul 7 06:05:19.615130 containerd[1544]: time="2025-07-07T06:05:19.615086301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 7 06:05:19.629987 systemd-networkd[1456]: caliaa5cfe72529: Gained IPv6LL Jul 7 06:05:20.920782 kubelet[2727]: I0707 06:05:20.920513 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:05:21.036455 containerd[1544]: time="2025-07-07T06:05:21.036372325Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f9500fbb35c808a738324b0631ac63e92db772c81e96ef05ea9825535fdf7448\" id:\"ef28fa8d1551ef0944daffba31b551a125e9edd26d4dcaa588e37d0c310762ae\" pid:4062 exit_status:1 exited_at:{seconds:1751868321 nanos:35086170}" Jul 7 06:05:21.159301 containerd[1544]: time="2025-07-07T06:05:21.159233286Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f9500fbb35c808a738324b0631ac63e92db772c81e96ef05ea9825535fdf7448\" id:\"be768f3948072b13979e888fc6b8caf19d20686d9a6f64f059bf60448cb179ab\" pid:4085 exit_status:1 exited_at:{seconds:1751868321 nanos:158957737}" Jul 7 06:05:21.319452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3940858712.mount: Deactivated 
successfully. Jul 7 06:05:21.330137 containerd[1544]: time="2025-07-07T06:05:21.330105888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:21.330952 containerd[1544]: time="2025-07-07T06:05:21.330919706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 7 06:05:21.331680 containerd[1544]: time="2025-07-07T06:05:21.331641123Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:21.333238 containerd[1544]: time="2025-07-07T06:05:21.333204398Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:21.334109 containerd[1544]: time="2025-07-07T06:05:21.333779155Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 1.718665754s" Jul 7 06:05:21.334109 containerd[1544]: time="2025-07-07T06:05:21.333803095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 7 06:05:21.338769 containerd[1544]: time="2025-07-07T06:05:21.338741718Z" level=info msg="CreateContainer within sandbox \"33da6a91d025c4357f264a7fa1af6e7bc93642453641497fcad74d6abb35c361\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 06:05:21.348536 
containerd[1544]: time="2025-07-07T06:05:21.347961456Z" level=info msg="Container d2d0798081a9a934e1c234cbb62d6d573837a0940b64a4d3772000498ee8d023: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:05:21.360456 containerd[1544]: time="2025-07-07T06:05:21.360425022Z" level=info msg="CreateContainer within sandbox \"33da6a91d025c4357f264a7fa1af6e7bc93642453641497fcad74d6abb35c361\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"d2d0798081a9a934e1c234cbb62d6d573837a0940b64a4d3772000498ee8d023\"" Jul 7 06:05:21.360960 containerd[1544]: time="2025-07-07T06:05:21.360902341Z" level=info msg="StartContainer for \"d2d0798081a9a934e1c234cbb62d6d573837a0940b64a4d3772000498ee8d023\"" Jul 7 06:05:21.362473 containerd[1544]: time="2025-07-07T06:05:21.362443035Z" level=info msg="connecting to shim d2d0798081a9a934e1c234cbb62d6d573837a0940b64a4d3772000498ee8d023" address="unix:///run/containerd/s/b73d9d53dfa662162c05aa121971fcd460358fcab2297909e94a870ad0d79955" protocol=ttrpc version=3 Jul 7 06:05:21.386833 systemd[1]: Started cri-containerd-d2d0798081a9a934e1c234cbb62d6d573837a0940b64a4d3772000498ee8d023.scope - libcontainer container d2d0798081a9a934e1c234cbb62d6d573837a0940b64a4d3772000498ee8d023. 
Jul 7 06:05:21.434651 containerd[1544]: time="2025-07-07T06:05:21.434607393Z" level=info msg="StartContainer for \"d2d0798081a9a934e1c234cbb62d6d573837a0940b64a4d3772000498ee8d023\" returns successfully" Jul 7 06:05:22.278002 kubelet[2727]: I0707 06:05:22.276877 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-56cd7c5598-h5bkb" podStartSLOduration=1.8515525350000002 podStartE2EDuration="5.276853882s" podCreationTimestamp="2025-07-07 06:05:17 +0000 UTC" firstStartedPulling="2025-07-07 06:05:17.909320246 +0000 UTC m=+29.906445402" lastFinishedPulling="2025-07-07 06:05:21.334621593 +0000 UTC m=+33.331746749" observedRunningTime="2025-07-07 06:05:22.276066484 +0000 UTC m=+34.273191640" watchObservedRunningTime="2025-07-07 06:05:22.276853882 +0000 UTC m=+34.273979038" Jul 7 06:05:23.114728 kubelet[2727]: E0707 06:05:23.113939 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:05:23.115888 containerd[1544]: time="2025-07-07T06:05:23.115861217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tzg8k,Uid:ad6c7672-9663-4bb5-81e2-682e5b9ee692,Namespace:kube-system,Attempt:0,}" Jul 7 06:05:23.235625 systemd-networkd[1456]: calidb19015b013: Link UP Jul 7 06:05:23.236876 systemd-networkd[1456]: calidb19015b013: Gained carrier Jul 7 06:05:23.250904 containerd[1544]: 2025-07-07 06:05:23.148 [INFO][4182] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:05:23.250904 containerd[1544]: 2025-07-07 06:05:23.163 [INFO][4182] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--119--245-k8s-coredns--674b8bbfcf--tzg8k-eth0 coredns-674b8bbfcf- kube-system ad6c7672-9663-4bb5-81e2-682e5b9ee692 811 0 2025-07-07 06:04:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf 
projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-236-119-245 coredns-674b8bbfcf-tzg8k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidb19015b013 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3895dc030d06d36c7c1593f2fd66700f98da851daaf774506ca68418e796051e" Namespace="kube-system" Pod="coredns-674b8bbfcf-tzg8k" WorkloadEndpoint="172--236--119--245-k8s-coredns--674b8bbfcf--tzg8k-" Jul 7 06:05:23.250904 containerd[1544]: 2025-07-07 06:05:23.164 [INFO][4182] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3895dc030d06d36c7c1593f2fd66700f98da851daaf774506ca68418e796051e" Namespace="kube-system" Pod="coredns-674b8bbfcf-tzg8k" WorkloadEndpoint="172--236--119--245-k8s-coredns--674b8bbfcf--tzg8k-eth0" Jul 7 06:05:23.250904 containerd[1544]: 2025-07-07 06:05:23.195 [INFO][4194] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3895dc030d06d36c7c1593f2fd66700f98da851daaf774506ca68418e796051e" HandleID="k8s-pod-network.3895dc030d06d36c7c1593f2fd66700f98da851daaf774506ca68418e796051e" Workload="172--236--119--245-k8s-coredns--674b8bbfcf--tzg8k-eth0" Jul 7 06:05:23.250904 containerd[1544]: 2025-07-07 06:05:23.196 [INFO][4194] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3895dc030d06d36c7c1593f2fd66700f98da851daaf774506ca68418e796051e" HandleID="k8s-pod-network.3895dc030d06d36c7c1593f2fd66700f98da851daaf774506ca68418e796051e" Workload="172--236--119--245-k8s-coredns--674b8bbfcf--tzg8k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f0a0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-236-119-245", "pod":"coredns-674b8bbfcf-tzg8k", "timestamp":"2025-07-07 06:05:23.195842706 +0000 UTC"}, Hostname:"172-236-119-245", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:05:23.250904 containerd[1544]: 2025-07-07 06:05:23.196 [INFO][4194] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:05:23.250904 containerd[1544]: 2025-07-07 06:05:23.196 [INFO][4194] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:05:23.250904 containerd[1544]: 2025-07-07 06:05:23.196 [INFO][4194] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-119-245' Jul 7 06:05:23.250904 containerd[1544]: 2025-07-07 06:05:23.203 [INFO][4194] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3895dc030d06d36c7c1593f2fd66700f98da851daaf774506ca68418e796051e" host="172-236-119-245" Jul 7 06:05:23.250904 containerd[1544]: 2025-07-07 06:05:23.209 [INFO][4194] ipam/ipam.go 394: Looking up existing affinities for host host="172-236-119-245" Jul 7 06:05:23.250904 containerd[1544]: 2025-07-07 06:05:23.214 [INFO][4194] ipam/ipam.go 511: Trying affinity for 192.168.2.128/26 host="172-236-119-245" Jul 7 06:05:23.250904 containerd[1544]: 2025-07-07 06:05:23.216 [INFO][4194] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.128/26 host="172-236-119-245" Jul 7 06:05:23.250904 containerd[1544]: 2025-07-07 06:05:23.217 [INFO][4194] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.128/26 host="172-236-119-245" Jul 7 06:05:23.250904 containerd[1544]: 2025-07-07 06:05:23.218 [INFO][4194] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.128/26 handle="k8s-pod-network.3895dc030d06d36c7c1593f2fd66700f98da851daaf774506ca68418e796051e" host="172-236-119-245" Jul 7 06:05:23.250904 containerd[1544]: 2025-07-07 06:05:23.219 [INFO][4194] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3895dc030d06d36c7c1593f2fd66700f98da851daaf774506ca68418e796051e Jul 7 06:05:23.250904 containerd[1544]: 2025-07-07 
06:05:23.223 [INFO][4194] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.128/26 handle="k8s-pod-network.3895dc030d06d36c7c1593f2fd66700f98da851daaf774506ca68418e796051e" host="172-236-119-245" Jul 7 06:05:23.250904 containerd[1544]: 2025-07-07 06:05:23.229 [INFO][4194] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.2.130/26] block=192.168.2.128/26 handle="k8s-pod-network.3895dc030d06d36c7c1593f2fd66700f98da851daaf774506ca68418e796051e" host="172-236-119-245" Jul 7 06:05:23.250904 containerd[1544]: 2025-07-07 06:05:23.229 [INFO][4194] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.130/26] handle="k8s-pod-network.3895dc030d06d36c7c1593f2fd66700f98da851daaf774506ca68418e796051e" host="172-236-119-245" Jul 7 06:05:23.250904 containerd[1544]: 2025-07-07 06:05:23.229 [INFO][4194] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:05:23.250904 containerd[1544]: 2025-07-07 06:05:23.229 [INFO][4194] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.130/26] IPv6=[] ContainerID="3895dc030d06d36c7c1593f2fd66700f98da851daaf774506ca68418e796051e" HandleID="k8s-pod-network.3895dc030d06d36c7c1593f2fd66700f98da851daaf774506ca68418e796051e" Workload="172--236--119--245-k8s-coredns--674b8bbfcf--tzg8k-eth0" Jul 7 06:05:23.251469 containerd[1544]: 2025-07-07 06:05:23.232 [INFO][4182] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3895dc030d06d36c7c1593f2fd66700f98da851daaf774506ca68418e796051e" Namespace="kube-system" Pod="coredns-674b8bbfcf-tzg8k" WorkloadEndpoint="172--236--119--245-k8s-coredns--674b8bbfcf--tzg8k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--119--245-k8s-coredns--674b8bbfcf--tzg8k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ad6c7672-9663-4bb5-81e2-682e5b9ee692", ResourceVersion:"811", Generation:0, 
CreationTimestamp:time.Date(2025, time.July, 7, 6, 4, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-119-245", ContainerID:"", Pod:"coredns-674b8bbfcf-tzg8k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidb19015b013", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:23.251469 containerd[1544]: 2025-07-07 06:05:23.232 [INFO][4182] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.130/32] ContainerID="3895dc030d06d36c7c1593f2fd66700f98da851daaf774506ca68418e796051e" Namespace="kube-system" Pod="coredns-674b8bbfcf-tzg8k" WorkloadEndpoint="172--236--119--245-k8s-coredns--674b8bbfcf--tzg8k-eth0" Jul 7 06:05:23.251469 containerd[1544]: 2025-07-07 06:05:23.232 [INFO][4182] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb19015b013 
ContainerID="3895dc030d06d36c7c1593f2fd66700f98da851daaf774506ca68418e796051e" Namespace="kube-system" Pod="coredns-674b8bbfcf-tzg8k" WorkloadEndpoint="172--236--119--245-k8s-coredns--674b8bbfcf--tzg8k-eth0" Jul 7 06:05:23.251469 containerd[1544]: 2025-07-07 06:05:23.236 [INFO][4182] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3895dc030d06d36c7c1593f2fd66700f98da851daaf774506ca68418e796051e" Namespace="kube-system" Pod="coredns-674b8bbfcf-tzg8k" WorkloadEndpoint="172--236--119--245-k8s-coredns--674b8bbfcf--tzg8k-eth0" Jul 7 06:05:23.251597 containerd[1544]: 2025-07-07 06:05:23.236 [INFO][4182] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3895dc030d06d36c7c1593f2fd66700f98da851daaf774506ca68418e796051e" Namespace="kube-system" Pod="coredns-674b8bbfcf-tzg8k" WorkloadEndpoint="172--236--119--245-k8s-coredns--674b8bbfcf--tzg8k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--119--245-k8s-coredns--674b8bbfcf--tzg8k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ad6c7672-9663-4bb5-81e2-682e5b9ee692", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 4, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-119-245", ContainerID:"3895dc030d06d36c7c1593f2fd66700f98da851daaf774506ca68418e796051e", Pod:"coredns-674b8bbfcf-tzg8k", Endpoint:"eth0", 
ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidb19015b013", MAC:"ba:a9:e6:e7:15:cf", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:23.251597 containerd[1544]: 2025-07-07 06:05:23.244 [INFO][4182] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3895dc030d06d36c7c1593f2fd66700f98da851daaf774506ca68418e796051e" Namespace="kube-system" Pod="coredns-674b8bbfcf-tzg8k" WorkloadEndpoint="172--236--119--245-k8s-coredns--674b8bbfcf--tzg8k-eth0" Jul 7 06:05:23.280002 containerd[1544]: time="2025-07-07T06:05:23.279271204Z" level=info msg="connecting to shim 3895dc030d06d36c7c1593f2fd66700f98da851daaf774506ca68418e796051e" address="unix:///run/containerd/s/0738a149c96f1831978290207071da2d7b55514448521c283bc205e424e0b472" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:05:23.309827 systemd[1]: Started cri-containerd-3895dc030d06d36c7c1593f2fd66700f98da851daaf774506ca68418e796051e.scope - libcontainer container 3895dc030d06d36c7c1593f2fd66700f98da851daaf774506ca68418e796051e. 
Jul 7 06:05:23.363415 containerd[1544]: time="2025-07-07T06:05:23.363370221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tzg8k,Uid:ad6c7672-9663-4bb5-81e2-682e5b9ee692,Namespace:kube-system,Attempt:0,} returns sandbox id \"3895dc030d06d36c7c1593f2fd66700f98da851daaf774506ca68418e796051e\"" Jul 7 06:05:23.364416 kubelet[2727]: E0707 06:05:23.364384 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:05:23.369184 containerd[1544]: time="2025-07-07T06:05:23.369090703Z" level=info msg="CreateContainer within sandbox \"3895dc030d06d36c7c1593f2fd66700f98da851daaf774506ca68418e796051e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:05:23.379756 containerd[1544]: time="2025-07-07T06:05:23.378525834Z" level=info msg="Container 584b03d6c78cc08ba3e30ba3a51ad94d2d6f58c2e3fe03aa9bfcee0596f0b2bc: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:05:23.382400 containerd[1544]: time="2025-07-07T06:05:23.382368462Z" level=info msg="CreateContainer within sandbox \"3895dc030d06d36c7c1593f2fd66700f98da851daaf774506ca68418e796051e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"584b03d6c78cc08ba3e30ba3a51ad94d2d6f58c2e3fe03aa9bfcee0596f0b2bc\"" Jul 7 06:05:23.382956 containerd[1544]: time="2025-07-07T06:05:23.382889520Z" level=info msg="StartContainer for \"584b03d6c78cc08ba3e30ba3a51ad94d2d6f58c2e3fe03aa9bfcee0596f0b2bc\"" Jul 7 06:05:23.383643 containerd[1544]: time="2025-07-07T06:05:23.383595068Z" level=info msg="connecting to shim 584b03d6c78cc08ba3e30ba3a51ad94d2d6f58c2e3fe03aa9bfcee0596f0b2bc" address="unix:///run/containerd/s/0738a149c96f1831978290207071da2d7b55514448521c283bc205e424e0b472" protocol=ttrpc version=3 Jul 7 06:05:23.408835 systemd[1]: Started cri-containerd-584b03d6c78cc08ba3e30ba3a51ad94d2d6f58c2e3fe03aa9bfcee0596f0b2bc.scope - 
libcontainer container 584b03d6c78cc08ba3e30ba3a51ad94d2d6f58c2e3fe03aa9bfcee0596f0b2bc. Jul 7 06:05:23.435610 containerd[1544]: time="2025-07-07T06:05:23.435539645Z" level=info msg="StartContainer for \"584b03d6c78cc08ba3e30ba3a51ad94d2d6f58c2e3fe03aa9bfcee0596f0b2bc\" returns successfully" Jul 7 06:05:24.270575 kubelet[2727]: E0707 06:05:24.270543 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:05:24.307278 kubelet[2727]: I0707 06:05:24.307227 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-tzg8k" podStartSLOduration=29.307211016 podStartE2EDuration="29.307211016s" podCreationTimestamp="2025-07-07 06:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:05:24.296576687 +0000 UTC m=+36.293701853" watchObservedRunningTime="2025-07-07 06:05:24.307211016 +0000 UTC m=+36.304336172" Jul 7 06:05:24.875913 systemd-networkd[1456]: calidb19015b013: Gained IPv6LL Jul 7 06:05:25.114438 kubelet[2727]: E0707 06:05:25.114278 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:05:25.115332 containerd[1544]: time="2025-07-07T06:05:25.115287396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vzz9g,Uid:95a5639a-5c75-4697-b28e-c61f7dab6169,Namespace:kube-system,Attempt:0,}" Jul 7 06:05:25.116073 containerd[1544]: time="2025-07-07T06:05:25.115612265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-9jrmp,Uid:660f5dd9-b647-4594-92a6-e946605f47e3,Namespace:calico-system,Attempt:0,}" Jul 7 06:05:25.274648 kubelet[2727]: E0707 06:05:25.273868 2727 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:05:25.305092 systemd-networkd[1456]: cali7680bb52b96: Link UP Jul 7 06:05:25.305933 systemd-networkd[1456]: cali7680bb52b96: Gained carrier Jul 7 06:05:25.323655 containerd[1544]: 2025-07-07 06:05:25.171 [INFO][4319] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:05:25.323655 containerd[1544]: 2025-07-07 06:05:25.189 [INFO][4319] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--119--245-k8s-coredns--674b8bbfcf--vzz9g-eth0 coredns-674b8bbfcf- kube-system 95a5639a-5c75-4697-b28e-c61f7dab6169 818 0 2025-07-07 06:04:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-236-119-245 coredns-674b8bbfcf-vzz9g eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7680bb52b96 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2d239a6e55102f30e2031c6a6cbb13fc2d38f9ed725fec83ea901e4fc60a6dd4" Namespace="kube-system" Pod="coredns-674b8bbfcf-vzz9g" WorkloadEndpoint="172--236--119--245-k8s-coredns--674b8bbfcf--vzz9g-" Jul 7 06:05:25.323655 containerd[1544]: 2025-07-07 06:05:25.189 [INFO][4319] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2d239a6e55102f30e2031c6a6cbb13fc2d38f9ed725fec83ea901e4fc60a6dd4" Namespace="kube-system" Pod="coredns-674b8bbfcf-vzz9g" WorkloadEndpoint="172--236--119--245-k8s-coredns--674b8bbfcf--vzz9g-eth0" Jul 7 06:05:25.323655 containerd[1544]: 2025-07-07 06:05:25.243 [INFO][4353] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2d239a6e55102f30e2031c6a6cbb13fc2d38f9ed725fec83ea901e4fc60a6dd4" 
HandleID="k8s-pod-network.2d239a6e55102f30e2031c6a6cbb13fc2d38f9ed725fec83ea901e4fc60a6dd4" Workload="172--236--119--245-k8s-coredns--674b8bbfcf--vzz9g-eth0" Jul 7 06:05:25.323655 containerd[1544]: 2025-07-07 06:05:25.243 [INFO][4353] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2d239a6e55102f30e2031c6a6cbb13fc2d38f9ed725fec83ea901e4fc60a6dd4" HandleID="k8s-pod-network.2d239a6e55102f30e2031c6a6cbb13fc2d38f9ed725fec83ea901e4fc60a6dd4" Workload="172--236--119--245-k8s-coredns--674b8bbfcf--vzz9g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000323890), Attrs:map[string]string{"namespace":"kube-system", "node":"172-236-119-245", "pod":"coredns-674b8bbfcf-vzz9g", "timestamp":"2025-07-07 06:05:25.240553143 +0000 UTC"}, Hostname:"172-236-119-245", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:05:25.323655 containerd[1544]: 2025-07-07 06:05:25.243 [INFO][4353] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:05:25.323655 containerd[1544]: 2025-07-07 06:05:25.243 [INFO][4353] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:05:25.323655 containerd[1544]: 2025-07-07 06:05:25.243 [INFO][4353] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-119-245' Jul 7 06:05:25.323655 containerd[1544]: 2025-07-07 06:05:25.253 [INFO][4353] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2d239a6e55102f30e2031c6a6cbb13fc2d38f9ed725fec83ea901e4fc60a6dd4" host="172-236-119-245" Jul 7 06:05:25.323655 containerd[1544]: 2025-07-07 06:05:25.257 [INFO][4353] ipam/ipam.go 394: Looking up existing affinities for host host="172-236-119-245" Jul 7 06:05:25.323655 containerd[1544]: 2025-07-07 06:05:25.270 [INFO][4353] ipam/ipam.go 511: Trying affinity for 192.168.2.128/26 host="172-236-119-245" Jul 7 06:05:25.323655 containerd[1544]: 2025-07-07 06:05:25.271 [INFO][4353] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.128/26 host="172-236-119-245" Jul 7 06:05:25.323655 containerd[1544]: 2025-07-07 06:05:25.274 [INFO][4353] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.128/26 host="172-236-119-245" Jul 7 06:05:25.323655 containerd[1544]: 2025-07-07 06:05:25.275 [INFO][4353] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.128/26 handle="k8s-pod-network.2d239a6e55102f30e2031c6a6cbb13fc2d38f9ed725fec83ea901e4fc60a6dd4" host="172-236-119-245" Jul 7 06:05:25.323655 containerd[1544]: 2025-07-07 06:05:25.277 [INFO][4353] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2d239a6e55102f30e2031c6a6cbb13fc2d38f9ed725fec83ea901e4fc60a6dd4 Jul 7 06:05:25.323655 containerd[1544]: 2025-07-07 06:05:25.281 [INFO][4353] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.128/26 handle="k8s-pod-network.2d239a6e55102f30e2031c6a6cbb13fc2d38f9ed725fec83ea901e4fc60a6dd4" host="172-236-119-245" Jul 7 06:05:25.323655 containerd[1544]: 2025-07-07 06:05:25.289 [INFO][4353] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.2.131/26] block=192.168.2.128/26 
handle="k8s-pod-network.2d239a6e55102f30e2031c6a6cbb13fc2d38f9ed725fec83ea901e4fc60a6dd4" host="172-236-119-245" Jul 7 06:05:25.323655 containerd[1544]: 2025-07-07 06:05:25.289 [INFO][4353] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.131/26] handle="k8s-pod-network.2d239a6e55102f30e2031c6a6cbb13fc2d38f9ed725fec83ea901e4fc60a6dd4" host="172-236-119-245" Jul 7 06:05:25.323655 containerd[1544]: 2025-07-07 06:05:25.289 [INFO][4353] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:05:25.323655 containerd[1544]: 2025-07-07 06:05:25.289 [INFO][4353] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.131/26] IPv6=[] ContainerID="2d239a6e55102f30e2031c6a6cbb13fc2d38f9ed725fec83ea901e4fc60a6dd4" HandleID="k8s-pod-network.2d239a6e55102f30e2031c6a6cbb13fc2d38f9ed725fec83ea901e4fc60a6dd4" Workload="172--236--119--245-k8s-coredns--674b8bbfcf--vzz9g-eth0" Jul 7 06:05:25.324440 containerd[1544]: 2025-07-07 06:05:25.295 [INFO][4319] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2d239a6e55102f30e2031c6a6cbb13fc2d38f9ed725fec83ea901e4fc60a6dd4" Namespace="kube-system" Pod="coredns-674b8bbfcf-vzz9g" WorkloadEndpoint="172--236--119--245-k8s-coredns--674b8bbfcf--vzz9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--119--245-k8s-coredns--674b8bbfcf--vzz9g-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"95a5639a-5c75-4697-b28e-c61f7dab6169", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 4, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-119-245", ContainerID:"", Pod:"coredns-674b8bbfcf-vzz9g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7680bb52b96", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:25.324440 containerd[1544]: 2025-07-07 06:05:25.295 [INFO][4319] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.131/32] ContainerID="2d239a6e55102f30e2031c6a6cbb13fc2d38f9ed725fec83ea901e4fc60a6dd4" Namespace="kube-system" Pod="coredns-674b8bbfcf-vzz9g" WorkloadEndpoint="172--236--119--245-k8s-coredns--674b8bbfcf--vzz9g-eth0" Jul 7 06:05:25.324440 containerd[1544]: 2025-07-07 06:05:25.295 [INFO][4319] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7680bb52b96 ContainerID="2d239a6e55102f30e2031c6a6cbb13fc2d38f9ed725fec83ea901e4fc60a6dd4" Namespace="kube-system" Pod="coredns-674b8bbfcf-vzz9g" WorkloadEndpoint="172--236--119--245-k8s-coredns--674b8bbfcf--vzz9g-eth0" Jul 7 06:05:25.324440 containerd[1544]: 2025-07-07 06:05:25.307 [INFO][4319] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2d239a6e55102f30e2031c6a6cbb13fc2d38f9ed725fec83ea901e4fc60a6dd4" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-vzz9g" WorkloadEndpoint="172--236--119--245-k8s-coredns--674b8bbfcf--vzz9g-eth0" Jul 7 06:05:25.324554 containerd[1544]: 2025-07-07 06:05:25.309 [INFO][4319] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2d239a6e55102f30e2031c6a6cbb13fc2d38f9ed725fec83ea901e4fc60a6dd4" Namespace="kube-system" Pod="coredns-674b8bbfcf-vzz9g" WorkloadEndpoint="172--236--119--245-k8s-coredns--674b8bbfcf--vzz9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--119--245-k8s-coredns--674b8bbfcf--vzz9g-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"95a5639a-5c75-4697-b28e-c61f7dab6169", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 4, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-119-245", ContainerID:"2d239a6e55102f30e2031c6a6cbb13fc2d38f9ed725fec83ea901e4fc60a6dd4", Pod:"coredns-674b8bbfcf-vzz9g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7680bb52b96", MAC:"1e:71:6b:62:c2:c8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:25.324554 containerd[1544]: 2025-07-07 06:05:25.321 [INFO][4319] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2d239a6e55102f30e2031c6a6cbb13fc2d38f9ed725fec83ea901e4fc60a6dd4" Namespace="kube-system" Pod="coredns-674b8bbfcf-vzz9g" WorkloadEndpoint="172--236--119--245-k8s-coredns--674b8bbfcf--vzz9g-eth0" Jul 7 06:05:25.341028 containerd[1544]: time="2025-07-07T06:05:25.340999871Z" level=info msg="connecting to shim 2d239a6e55102f30e2031c6a6cbb13fc2d38f9ed725fec83ea901e4fc60a6dd4" address="unix:///run/containerd/s/54939a84b499e72a7829506f7fe1b294a20805b282d6de33a71597e367f01753" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:05:25.368983 systemd[1]: Started cri-containerd-2d239a6e55102f30e2031c6a6cbb13fc2d38f9ed725fec83ea901e4fc60a6dd4.scope - libcontainer container 2d239a6e55102f30e2031c6a6cbb13fc2d38f9ed725fec83ea901e4fc60a6dd4. 
Jul 7 06:05:25.398632 systemd-networkd[1456]: calidad8590056d: Link UP Jul 7 06:05:25.398956 systemd-networkd[1456]: calidad8590056d: Gained carrier Jul 7 06:05:25.415287 containerd[1544]: 2025-07-07 06:05:25.173 [INFO][4322] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:05:25.415287 containerd[1544]: 2025-07-07 06:05:25.189 [INFO][4322] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--119--245-k8s-goldmane--768f4c5c69--9jrmp-eth0 goldmane-768f4c5c69- calico-system 660f5dd9-b647-4594-92a6-e946605f47e3 817 0 2025-07-07 06:05:05 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-236-119-245 goldmane-768f4c5c69-9jrmp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calidad8590056d [] [] }} ContainerID="44b4132545076baba6589278f3d75da165e28d6f4a220c0288d35a3173637629" Namespace="calico-system" Pod="goldmane-768f4c5c69-9jrmp" WorkloadEndpoint="172--236--119--245-k8s-goldmane--768f4c5c69--9jrmp-" Jul 7 06:05:25.415287 containerd[1544]: 2025-07-07 06:05:25.189 [INFO][4322] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="44b4132545076baba6589278f3d75da165e28d6f4a220c0288d35a3173637629" Namespace="calico-system" Pod="goldmane-768f4c5c69-9jrmp" WorkloadEndpoint="172--236--119--245-k8s-goldmane--768f4c5c69--9jrmp-eth0" Jul 7 06:05:25.415287 containerd[1544]: 2025-07-07 06:05:25.244 [INFO][4358] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="44b4132545076baba6589278f3d75da165e28d6f4a220c0288d35a3173637629" HandleID="k8s-pod-network.44b4132545076baba6589278f3d75da165e28d6f4a220c0288d35a3173637629" Workload="172--236--119--245-k8s-goldmane--768f4c5c69--9jrmp-eth0" Jul 7 06:05:25.415287 containerd[1544]: 2025-07-07 06:05:25.244 
[INFO][4358] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="44b4132545076baba6589278f3d75da165e28d6f4a220c0288d35a3173637629" HandleID="k8s-pod-network.44b4132545076baba6589278f3d75da165e28d6f4a220c0288d35a3173637629" Workload="172--236--119--245-k8s-goldmane--768f4c5c69--9jrmp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-119-245", "pod":"goldmane-768f4c5c69-9jrmp", "timestamp":"2025-07-07 06:05:25.243596265 +0000 UTC"}, Hostname:"172-236-119-245", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:05:25.415287 containerd[1544]: 2025-07-07 06:05:25.244 [INFO][4358] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:05:25.415287 containerd[1544]: 2025-07-07 06:05:25.289 [INFO][4358] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:05:25.415287 containerd[1544]: 2025-07-07 06:05:25.289 [INFO][4358] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-119-245' Jul 7 06:05:25.415287 containerd[1544]: 2025-07-07 06:05:25.356 [INFO][4358] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.44b4132545076baba6589278f3d75da165e28d6f4a220c0288d35a3173637629" host="172-236-119-245" Jul 7 06:05:25.415287 containerd[1544]: 2025-07-07 06:05:25.363 [INFO][4358] ipam/ipam.go 394: Looking up existing affinities for host host="172-236-119-245" Jul 7 06:05:25.415287 containerd[1544]: 2025-07-07 06:05:25.370 [INFO][4358] ipam/ipam.go 511: Trying affinity for 192.168.2.128/26 host="172-236-119-245" Jul 7 06:05:25.415287 containerd[1544]: 2025-07-07 06:05:25.372 [INFO][4358] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.128/26 host="172-236-119-245" Jul 7 06:05:25.415287 containerd[1544]: 2025-07-07 06:05:25.374 [INFO][4358] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.128/26 host="172-236-119-245" Jul 7 06:05:25.415287 containerd[1544]: 2025-07-07 06:05:25.375 [INFO][4358] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.128/26 handle="k8s-pod-network.44b4132545076baba6589278f3d75da165e28d6f4a220c0288d35a3173637629" host="172-236-119-245" Jul 7 06:05:25.415287 containerd[1544]: 2025-07-07 06:05:25.376 [INFO][4358] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.44b4132545076baba6589278f3d75da165e28d6f4a220c0288d35a3173637629 Jul 7 06:05:25.415287 containerd[1544]: 2025-07-07 06:05:25.383 [INFO][4358] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.128/26 handle="k8s-pod-network.44b4132545076baba6589278f3d75da165e28d6f4a220c0288d35a3173637629" host="172-236-119-245" Jul 7 06:05:25.415287 containerd[1544]: 2025-07-07 06:05:25.391 [INFO][4358] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.2.132/26] block=192.168.2.128/26 
handle="k8s-pod-network.44b4132545076baba6589278f3d75da165e28d6f4a220c0288d35a3173637629" host="172-236-119-245" Jul 7 06:05:25.415287 containerd[1544]: 2025-07-07 06:05:25.391 [INFO][4358] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.132/26] handle="k8s-pod-network.44b4132545076baba6589278f3d75da165e28d6f4a220c0288d35a3173637629" host="172-236-119-245" Jul 7 06:05:25.415287 containerd[1544]: 2025-07-07 06:05:25.391 [INFO][4358] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:05:25.415287 containerd[1544]: 2025-07-07 06:05:25.391 [INFO][4358] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.132/26] IPv6=[] ContainerID="44b4132545076baba6589278f3d75da165e28d6f4a220c0288d35a3173637629" HandleID="k8s-pod-network.44b4132545076baba6589278f3d75da165e28d6f4a220c0288d35a3173637629" Workload="172--236--119--245-k8s-goldmane--768f4c5c69--9jrmp-eth0" Jul 7 06:05:25.416856 containerd[1544]: 2025-07-07 06:05:25.395 [INFO][4322] cni-plugin/k8s.go 418: Populated endpoint ContainerID="44b4132545076baba6589278f3d75da165e28d6f4a220c0288d35a3173637629" Namespace="calico-system" Pod="goldmane-768f4c5c69-9jrmp" WorkloadEndpoint="172--236--119--245-k8s-goldmane--768f4c5c69--9jrmp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--119--245-k8s-goldmane--768f4c5c69--9jrmp-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"660f5dd9-b647-4594-92a6-e946605f47e3", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-119-245", ContainerID:"", Pod:"goldmane-768f4c5c69-9jrmp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.2.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidad8590056d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:25.416856 containerd[1544]: 2025-07-07 06:05:25.395 [INFO][4322] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.132/32] ContainerID="44b4132545076baba6589278f3d75da165e28d6f4a220c0288d35a3173637629" Namespace="calico-system" Pod="goldmane-768f4c5c69-9jrmp" WorkloadEndpoint="172--236--119--245-k8s-goldmane--768f4c5c69--9jrmp-eth0" Jul 7 06:05:25.416856 containerd[1544]: 2025-07-07 06:05:25.395 [INFO][4322] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidad8590056d ContainerID="44b4132545076baba6589278f3d75da165e28d6f4a220c0288d35a3173637629" Namespace="calico-system" Pod="goldmane-768f4c5c69-9jrmp" WorkloadEndpoint="172--236--119--245-k8s-goldmane--768f4c5c69--9jrmp-eth0" Jul 7 06:05:25.416856 containerd[1544]: 2025-07-07 06:05:25.398 [INFO][4322] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="44b4132545076baba6589278f3d75da165e28d6f4a220c0288d35a3173637629" Namespace="calico-system" Pod="goldmane-768f4c5c69-9jrmp" WorkloadEndpoint="172--236--119--245-k8s-goldmane--768f4c5c69--9jrmp-eth0" Jul 7 06:05:25.416856 containerd[1544]: 2025-07-07 06:05:25.398 [INFO][4322] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="44b4132545076baba6589278f3d75da165e28d6f4a220c0288d35a3173637629" 
Namespace="calico-system" Pod="goldmane-768f4c5c69-9jrmp" WorkloadEndpoint="172--236--119--245-k8s-goldmane--768f4c5c69--9jrmp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--119--245-k8s-goldmane--768f4c5c69--9jrmp-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"660f5dd9-b647-4594-92a6-e946605f47e3", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-119-245", ContainerID:"44b4132545076baba6589278f3d75da165e28d6f4a220c0288d35a3173637629", Pod:"goldmane-768f4c5c69-9jrmp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.2.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidad8590056d", MAC:"ea:14:45:fc:46:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:25.416856 containerd[1544]: 2025-07-07 06:05:25.412 [INFO][4322] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="44b4132545076baba6589278f3d75da165e28d6f4a220c0288d35a3173637629" Namespace="calico-system" Pod="goldmane-768f4c5c69-9jrmp" WorkloadEndpoint="172--236--119--245-k8s-goldmane--768f4c5c69--9jrmp-eth0" Jul 7 06:05:25.447490 
containerd[1544]: time="2025-07-07T06:05:25.447429111Z" level=info msg="connecting to shim 44b4132545076baba6589278f3d75da165e28d6f4a220c0288d35a3173637629" address="unix:///run/containerd/s/5c5a882cfa77d7995f916d5092294a683a518f6c06fbe4661b208c5deb3ce675" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:05:25.467950 containerd[1544]: time="2025-07-07T06:05:25.467877494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vzz9g,Uid:95a5639a-5c75-4697-b28e-c61f7dab6169,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d239a6e55102f30e2031c6a6cbb13fc2d38f9ed725fec83ea901e4fc60a6dd4\"" Jul 7 06:05:25.468815 kubelet[2727]: E0707 06:05:25.468794 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:05:25.473577 containerd[1544]: time="2025-07-07T06:05:25.473555408Z" level=info msg="CreateContainer within sandbox \"2d239a6e55102f30e2031c6a6cbb13fc2d38f9ed725fec83ea901e4fc60a6dd4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:05:25.485926 systemd[1]: Started cri-containerd-44b4132545076baba6589278f3d75da165e28d6f4a220c0288d35a3173637629.scope - libcontainer container 44b4132545076baba6589278f3d75da165e28d6f4a220c0288d35a3173637629. 
Jul 7 06:05:25.492156 containerd[1544]: time="2025-07-07T06:05:25.492077126Z" level=info msg="Container 3a540d327499eda5fb92f64c4fc250c59a677a991b3ee0fe7a56fb9e62d10dcb: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:05:25.497639 containerd[1544]: time="2025-07-07T06:05:25.497607920Z" level=info msg="CreateContainer within sandbox \"2d239a6e55102f30e2031c6a6cbb13fc2d38f9ed725fec83ea901e4fc60a6dd4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3a540d327499eda5fb92f64c4fc250c59a677a991b3ee0fe7a56fb9e62d10dcb\"" Jul 7 06:05:25.498632 containerd[1544]: time="2025-07-07T06:05:25.498593927Z" level=info msg="StartContainer for \"3a540d327499eda5fb92f64c4fc250c59a677a991b3ee0fe7a56fb9e62d10dcb\"" Jul 7 06:05:25.499268 containerd[1544]: time="2025-07-07T06:05:25.499219645Z" level=info msg="connecting to shim 3a540d327499eda5fb92f64c4fc250c59a677a991b3ee0fe7a56fb9e62d10dcb" address="unix:///run/containerd/s/54939a84b499e72a7829506f7fe1b294a20805b282d6de33a71597e367f01753" protocol=ttrpc version=3 Jul 7 06:05:25.527929 systemd[1]: Started cri-containerd-3a540d327499eda5fb92f64c4fc250c59a677a991b3ee0fe7a56fb9e62d10dcb.scope - libcontainer container 3a540d327499eda5fb92f64c4fc250c59a677a991b3ee0fe7a56fb9e62d10dcb. 
Jul 7 06:05:25.567191 containerd[1544]: time="2025-07-07T06:05:25.567152544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-9jrmp,Uid:660f5dd9-b647-4594-92a6-e946605f47e3,Namespace:calico-system,Attempt:0,} returns sandbox id \"44b4132545076baba6589278f3d75da165e28d6f4a220c0288d35a3173637629\"" Jul 7 06:05:25.568110 containerd[1544]: time="2025-07-07T06:05:25.568066402Z" level=info msg="StartContainer for \"3a540d327499eda5fb92f64c4fc250c59a677a991b3ee0fe7a56fb9e62d10dcb\" returns successfully" Jul 7 06:05:25.570215 containerd[1544]: time="2025-07-07T06:05:25.570135526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 06:05:26.116817 containerd[1544]: time="2025-07-07T06:05:26.116745535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75c5d6d7d5-mwflw,Uid:5584af6e-842e-4fc9-84d1-f6b0c3bfe672,Namespace:calico-system,Attempt:0,}" Jul 7 06:05:26.127725 containerd[1544]: time="2025-07-07T06:05:26.127254617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-789d8b94bc-596hn,Uid:42e67864-1946-4b93-ab86-d2296d74f6ca,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:05:26.127867 containerd[1544]: time="2025-07-07T06:05:26.127824206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wxjft,Uid:699001e3-2bb8-49d9-b0d8-60e5a17aecbb,Namespace:calico-system,Attempt:0,}" Jul 7 06:05:26.287948 kubelet[2727]: E0707 06:05:26.287032 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:05:26.293659 kubelet[2727]: E0707 06:05:26.293585 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:05:26.336120 kubelet[2727]: I0707 06:05:26.336048 2727 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-vzz9g" podStartSLOduration=31.33602779 podStartE2EDuration="31.33602779s" podCreationTimestamp="2025-07-07 06:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:05:26.312219623 +0000 UTC m=+38.309344789" watchObservedRunningTime="2025-07-07 06:05:26.33602779 +0000 UTC m=+38.333152956" Jul 7 06:05:26.380861 systemd-networkd[1456]: cali06163548fe0: Link UP Jul 7 06:05:26.381104 systemd-networkd[1456]: cali06163548fe0: Gained carrier Jul 7 06:05:26.402458 containerd[1544]: 2025-07-07 06:05:26.200 [INFO][4529] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:05:26.402458 containerd[1544]: 2025-07-07 06:05:26.221 [INFO][4529] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--119--245-k8s-calico--apiserver--789d8b94bc--596hn-eth0 calico-apiserver-789d8b94bc- calico-apiserver 42e67864-1946-4b93-ab86-d2296d74f6ca 816 0 2025-07-07 06:05:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:789d8b94bc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-236-119-245 calico-apiserver-789d8b94bc-596hn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali06163548fe0 [] [] }} ContainerID="c79162cc1f5510d9e26b7bb6db8060d4a840a89e3465296e12229208ab570db2" Namespace="calico-apiserver" Pod="calico-apiserver-789d8b94bc-596hn" WorkloadEndpoint="172--236--119--245-k8s-calico--apiserver--789d8b94bc--596hn-" Jul 7 06:05:26.402458 containerd[1544]: 2025-07-07 06:05:26.221 [INFO][4529] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="c79162cc1f5510d9e26b7bb6db8060d4a840a89e3465296e12229208ab570db2" Namespace="calico-apiserver" Pod="calico-apiserver-789d8b94bc-596hn" WorkloadEndpoint="172--236--119--245-k8s-calico--apiserver--789d8b94bc--596hn-eth0" Jul 7 06:05:26.402458 containerd[1544]: 2025-07-07 06:05:26.303 [INFO][4561] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c79162cc1f5510d9e26b7bb6db8060d4a840a89e3465296e12229208ab570db2" HandleID="k8s-pod-network.c79162cc1f5510d9e26b7bb6db8060d4a840a89e3465296e12229208ab570db2" Workload="172--236--119--245-k8s-calico--apiserver--789d8b94bc--596hn-eth0" Jul 7 06:05:26.402458 containerd[1544]: 2025-07-07 06:05:26.303 [INFO][4561] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c79162cc1f5510d9e26b7bb6db8060d4a840a89e3465296e12229208ab570db2" HandleID="k8s-pod-network.c79162cc1f5510d9e26b7bb6db8060d4a840a89e3465296e12229208ab570db2" Workload="172--236--119--245-k8s-calico--apiserver--789d8b94bc--596hn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000378120), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-236-119-245", "pod":"calico-apiserver-789d8b94bc-596hn", "timestamp":"2025-07-07 06:05:26.303096048 +0000 UTC"}, Hostname:"172-236-119-245", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:05:26.402458 containerd[1544]: 2025-07-07 06:05:26.303 [INFO][4561] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:05:26.402458 containerd[1544]: 2025-07-07 06:05:26.303 [INFO][4561] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:05:26.402458 containerd[1544]: 2025-07-07 06:05:26.303 [INFO][4561] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-119-245' Jul 7 06:05:26.402458 containerd[1544]: 2025-07-07 06:05:26.318 [INFO][4561] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c79162cc1f5510d9e26b7bb6db8060d4a840a89e3465296e12229208ab570db2" host="172-236-119-245" Jul 7 06:05:26.402458 containerd[1544]: 2025-07-07 06:05:26.334 [INFO][4561] ipam/ipam.go 394: Looking up existing affinities for host host="172-236-119-245" Jul 7 06:05:26.402458 containerd[1544]: 2025-07-07 06:05:26.342 [INFO][4561] ipam/ipam.go 511: Trying affinity for 192.168.2.128/26 host="172-236-119-245" Jul 7 06:05:26.402458 containerd[1544]: 2025-07-07 06:05:26.344 [INFO][4561] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.128/26 host="172-236-119-245" Jul 7 06:05:26.402458 containerd[1544]: 2025-07-07 06:05:26.350 [INFO][4561] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.128/26 host="172-236-119-245" Jul 7 06:05:26.402458 containerd[1544]: 2025-07-07 06:05:26.350 [INFO][4561] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.128/26 handle="k8s-pod-network.c79162cc1f5510d9e26b7bb6db8060d4a840a89e3465296e12229208ab570db2" host="172-236-119-245" Jul 7 06:05:26.402458 containerd[1544]: 2025-07-07 06:05:26.352 [INFO][4561] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c79162cc1f5510d9e26b7bb6db8060d4a840a89e3465296e12229208ab570db2 Jul 7 06:05:26.402458 containerd[1544]: 2025-07-07 06:05:26.356 [INFO][4561] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.128/26 handle="k8s-pod-network.c79162cc1f5510d9e26b7bb6db8060d4a840a89e3465296e12229208ab570db2" host="172-236-119-245" Jul 7 06:05:26.402458 containerd[1544]: 2025-07-07 06:05:26.361 [INFO][4561] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.2.133/26] block=192.168.2.128/26 
handle="k8s-pod-network.c79162cc1f5510d9e26b7bb6db8060d4a840a89e3465296e12229208ab570db2" host="172-236-119-245" Jul 7 06:05:26.402458 containerd[1544]: 2025-07-07 06:05:26.361 [INFO][4561] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.133/26] handle="k8s-pod-network.c79162cc1f5510d9e26b7bb6db8060d4a840a89e3465296e12229208ab570db2" host="172-236-119-245" Jul 7 06:05:26.402458 containerd[1544]: 2025-07-07 06:05:26.361 [INFO][4561] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:05:26.402934 containerd[1544]: 2025-07-07 06:05:26.361 [INFO][4561] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.133/26] IPv6=[] ContainerID="c79162cc1f5510d9e26b7bb6db8060d4a840a89e3465296e12229208ab570db2" HandleID="k8s-pod-network.c79162cc1f5510d9e26b7bb6db8060d4a840a89e3465296e12229208ab570db2" Workload="172--236--119--245-k8s-calico--apiserver--789d8b94bc--596hn-eth0" Jul 7 06:05:26.402934 containerd[1544]: 2025-07-07 06:05:26.371 [INFO][4529] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c79162cc1f5510d9e26b7bb6db8060d4a840a89e3465296e12229208ab570db2" Namespace="calico-apiserver" Pod="calico-apiserver-789d8b94bc-596hn" WorkloadEndpoint="172--236--119--245-k8s-calico--apiserver--789d8b94bc--596hn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--119--245-k8s-calico--apiserver--789d8b94bc--596hn-eth0", GenerateName:"calico-apiserver-789d8b94bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"42e67864-1946-4b93-ab86-d2296d74f6ca", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"789d8b94bc", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-119-245", ContainerID:"", Pod:"calico-apiserver-789d8b94bc-596hn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali06163548fe0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:26.402934 containerd[1544]: 2025-07-07 06:05:26.371 [INFO][4529] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.133/32] ContainerID="c79162cc1f5510d9e26b7bb6db8060d4a840a89e3465296e12229208ab570db2" Namespace="calico-apiserver" Pod="calico-apiserver-789d8b94bc-596hn" WorkloadEndpoint="172--236--119--245-k8s-calico--apiserver--789d8b94bc--596hn-eth0" Jul 7 06:05:26.402934 containerd[1544]: 2025-07-07 06:05:26.371 [INFO][4529] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali06163548fe0 ContainerID="c79162cc1f5510d9e26b7bb6db8060d4a840a89e3465296e12229208ab570db2" Namespace="calico-apiserver" Pod="calico-apiserver-789d8b94bc-596hn" WorkloadEndpoint="172--236--119--245-k8s-calico--apiserver--789d8b94bc--596hn-eth0" Jul 7 06:05:26.402934 containerd[1544]: 2025-07-07 06:05:26.382 [INFO][4529] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c79162cc1f5510d9e26b7bb6db8060d4a840a89e3465296e12229208ab570db2" Namespace="calico-apiserver" Pod="calico-apiserver-789d8b94bc-596hn" WorkloadEndpoint="172--236--119--245-k8s-calico--apiserver--789d8b94bc--596hn-eth0" Jul 7 06:05:26.402934 containerd[1544]: 2025-07-07 06:05:26.385 [INFO][4529] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c79162cc1f5510d9e26b7bb6db8060d4a840a89e3465296e12229208ab570db2" Namespace="calico-apiserver" Pod="calico-apiserver-789d8b94bc-596hn" WorkloadEndpoint="172--236--119--245-k8s-calico--apiserver--789d8b94bc--596hn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--119--245-k8s-calico--apiserver--789d8b94bc--596hn-eth0", GenerateName:"calico-apiserver-789d8b94bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"42e67864-1946-4b93-ab86-d2296d74f6ca", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"789d8b94bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-119-245", ContainerID:"c79162cc1f5510d9e26b7bb6db8060d4a840a89e3465296e12229208ab570db2", Pod:"calico-apiserver-789d8b94bc-596hn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali06163548fe0", MAC:"22:57:59:c8:7f:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:26.406111 containerd[1544]: 2025-07-07 06:05:26.395 [INFO][4529] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="c79162cc1f5510d9e26b7bb6db8060d4a840a89e3465296e12229208ab570db2" Namespace="calico-apiserver" Pod="calico-apiserver-789d8b94bc-596hn" WorkloadEndpoint="172--236--119--245-k8s-calico--apiserver--789d8b94bc--596hn-eth0" Jul 7 06:05:26.457068 containerd[1544]: time="2025-07-07T06:05:26.457009917Z" level=info msg="connecting to shim c79162cc1f5510d9e26b7bb6db8060d4a840a89e3465296e12229208ab570db2" address="unix:///run/containerd/s/3933a2460a6361104520d79e6bdc5fe7f467f4cd68bd9e086e180a65065737f5" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:05:26.516898 systemd[1]: Started cri-containerd-c79162cc1f5510d9e26b7bb6db8060d4a840a89e3465296e12229208ab570db2.scope - libcontainer container c79162cc1f5510d9e26b7bb6db8060d4a840a89e3465296e12229208ab570db2. Jul 7 06:05:26.532567 systemd-networkd[1456]: cali06bd388c8db: Link UP Jul 7 06:05:26.537563 systemd-networkd[1456]: cali06bd388c8db: Gained carrier Jul 7 06:05:26.584313 containerd[1544]: 2025-07-07 06:05:26.205 [INFO][4518] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:05:26.584313 containerd[1544]: 2025-07-07 06:05:26.224 [INFO][4518] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--119--245-k8s-calico--kube--controllers--75c5d6d7d5--mwflw-eth0 calico-kube-controllers-75c5d6d7d5- calico-system 5584af6e-842e-4fc9-84d1-f6b0c3bfe672 819 0 2025-07-07 06:05:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:75c5d6d7d5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-236-119-245 calico-kube-controllers-75c5d6d7d5-mwflw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali06bd388c8db [] [] }} 
ContainerID="bc680e156ad8ef72e4774c364dbc2dc3b156e36b92ca4ba32153d78c9ed78fdc" Namespace="calico-system" Pod="calico-kube-controllers-75c5d6d7d5-mwflw" WorkloadEndpoint="172--236--119--245-k8s-calico--kube--controllers--75c5d6d7d5--mwflw-" Jul 7 06:05:26.584313 containerd[1544]: 2025-07-07 06:05:26.224 [INFO][4518] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bc680e156ad8ef72e4774c364dbc2dc3b156e36b92ca4ba32153d78c9ed78fdc" Namespace="calico-system" Pod="calico-kube-controllers-75c5d6d7d5-mwflw" WorkloadEndpoint="172--236--119--245-k8s-calico--kube--controllers--75c5d6d7d5--mwflw-eth0" Jul 7 06:05:26.584313 containerd[1544]: 2025-07-07 06:05:26.306 [INFO][4559] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc680e156ad8ef72e4774c364dbc2dc3b156e36b92ca4ba32153d78c9ed78fdc" HandleID="k8s-pod-network.bc680e156ad8ef72e4774c364dbc2dc3b156e36b92ca4ba32153d78c9ed78fdc" Workload="172--236--119--245-k8s-calico--kube--controllers--75c5d6d7d5--mwflw-eth0" Jul 7 06:05:26.584313 containerd[1544]: 2025-07-07 06:05:26.306 [INFO][4559] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bc680e156ad8ef72e4774c364dbc2dc3b156e36b92ca4ba32153d78c9ed78fdc" HandleID="k8s-pod-network.bc680e156ad8ef72e4774c364dbc2dc3b156e36b92ca4ba32153d78c9ed78fdc" Workload="172--236--119--245-k8s-calico--kube--controllers--75c5d6d7d5--mwflw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035db40), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-119-245", "pod":"calico-kube-controllers-75c5d6d7d5-mwflw", "timestamp":"2025-07-07 06:05:26.306056129 +0000 UTC"}, Hostname:"172-236-119-245", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:05:26.584313 containerd[1544]: 2025-07-07 06:05:26.306 [INFO][4559] ipam/ipam_plugin.go 353: About to 
acquire host-wide IPAM lock. Jul 7 06:05:26.584313 containerd[1544]: 2025-07-07 06:05:26.361 [INFO][4559] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:05:26.584313 containerd[1544]: 2025-07-07 06:05:26.362 [INFO][4559] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-119-245' Jul 7 06:05:26.584313 containerd[1544]: 2025-07-07 06:05:26.417 [INFO][4559] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bc680e156ad8ef72e4774c364dbc2dc3b156e36b92ca4ba32153d78c9ed78fdc" host="172-236-119-245" Jul 7 06:05:26.584313 containerd[1544]: 2025-07-07 06:05:26.440 [INFO][4559] ipam/ipam.go 394: Looking up existing affinities for host host="172-236-119-245" Jul 7 06:05:26.584313 containerd[1544]: 2025-07-07 06:05:26.454 [INFO][4559] ipam/ipam.go 511: Trying affinity for 192.168.2.128/26 host="172-236-119-245" Jul 7 06:05:26.584313 containerd[1544]: 2025-07-07 06:05:26.462 [INFO][4559] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.128/26 host="172-236-119-245" Jul 7 06:05:26.584313 containerd[1544]: 2025-07-07 06:05:26.471 [INFO][4559] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.128/26 host="172-236-119-245" Jul 7 06:05:26.584313 containerd[1544]: 2025-07-07 06:05:26.471 [INFO][4559] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.128/26 handle="k8s-pod-network.bc680e156ad8ef72e4774c364dbc2dc3b156e36b92ca4ba32153d78c9ed78fdc" host="172-236-119-245" Jul 7 06:05:26.584313 containerd[1544]: 2025-07-07 06:05:26.482 [INFO][4559] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bc680e156ad8ef72e4774c364dbc2dc3b156e36b92ca4ba32153d78c9ed78fdc Jul 7 06:05:26.584313 containerd[1544]: 2025-07-07 06:05:26.504 [INFO][4559] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.128/26 handle="k8s-pod-network.bc680e156ad8ef72e4774c364dbc2dc3b156e36b92ca4ba32153d78c9ed78fdc" host="172-236-119-245" Jul 7 06:05:26.584313 
containerd[1544]: 2025-07-07 06:05:26.513 [INFO][4559] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.2.134/26] block=192.168.2.128/26 handle="k8s-pod-network.bc680e156ad8ef72e4774c364dbc2dc3b156e36b92ca4ba32153d78c9ed78fdc" host="172-236-119-245" Jul 7 06:05:26.584313 containerd[1544]: 2025-07-07 06:05:26.513 [INFO][4559] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.134/26] handle="k8s-pod-network.bc680e156ad8ef72e4774c364dbc2dc3b156e36b92ca4ba32153d78c9ed78fdc" host="172-236-119-245" Jul 7 06:05:26.584313 containerd[1544]: 2025-07-07 06:05:26.514 [INFO][4559] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:05:26.585172 containerd[1544]: 2025-07-07 06:05:26.515 [INFO][4559] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.134/26] IPv6=[] ContainerID="bc680e156ad8ef72e4774c364dbc2dc3b156e36b92ca4ba32153d78c9ed78fdc" HandleID="k8s-pod-network.bc680e156ad8ef72e4774c364dbc2dc3b156e36b92ca4ba32153d78c9ed78fdc" Workload="172--236--119--245-k8s-calico--kube--controllers--75c5d6d7d5--mwflw-eth0" Jul 7 06:05:26.585172 containerd[1544]: 2025-07-07 06:05:26.523 [INFO][4518] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bc680e156ad8ef72e4774c364dbc2dc3b156e36b92ca4ba32153d78c9ed78fdc" Namespace="calico-system" Pod="calico-kube-controllers-75c5d6d7d5-mwflw" WorkloadEndpoint="172--236--119--245-k8s-calico--kube--controllers--75c5d6d7d5--mwflw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--119--245-k8s-calico--kube--controllers--75c5d6d7d5--mwflw-eth0", GenerateName:"calico-kube-controllers-75c5d6d7d5-", Namespace:"calico-system", SelfLink:"", UID:"5584af6e-842e-4fc9-84d1-f6b0c3bfe672", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75c5d6d7d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-119-245", ContainerID:"", Pod:"calico-kube-controllers-75c5d6d7d5-mwflw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali06bd388c8db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:26.585172 containerd[1544]: 2025-07-07 06:05:26.523 [INFO][4518] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.134/32] ContainerID="bc680e156ad8ef72e4774c364dbc2dc3b156e36b92ca4ba32153d78c9ed78fdc" Namespace="calico-system" Pod="calico-kube-controllers-75c5d6d7d5-mwflw" WorkloadEndpoint="172--236--119--245-k8s-calico--kube--controllers--75c5d6d7d5--mwflw-eth0" Jul 7 06:05:26.585172 containerd[1544]: 2025-07-07 06:05:26.523 [INFO][4518] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali06bd388c8db ContainerID="bc680e156ad8ef72e4774c364dbc2dc3b156e36b92ca4ba32153d78c9ed78fdc" Namespace="calico-system" Pod="calico-kube-controllers-75c5d6d7d5-mwflw" WorkloadEndpoint="172--236--119--245-k8s-calico--kube--controllers--75c5d6d7d5--mwflw-eth0" Jul 7 06:05:26.585172 containerd[1544]: 2025-07-07 06:05:26.553 [INFO][4518] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="bc680e156ad8ef72e4774c364dbc2dc3b156e36b92ca4ba32153d78c9ed78fdc" Namespace="calico-system" Pod="calico-kube-controllers-75c5d6d7d5-mwflw" WorkloadEndpoint="172--236--119--245-k8s-calico--kube--controllers--75c5d6d7d5--mwflw-eth0" Jul 7 06:05:26.585295 containerd[1544]: 2025-07-07 06:05:26.555 [INFO][4518] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bc680e156ad8ef72e4774c364dbc2dc3b156e36b92ca4ba32153d78c9ed78fdc" Namespace="calico-system" Pod="calico-kube-controllers-75c5d6d7d5-mwflw" WorkloadEndpoint="172--236--119--245-k8s-calico--kube--controllers--75c5d6d7d5--mwflw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--119--245-k8s-calico--kube--controllers--75c5d6d7d5--mwflw-eth0", GenerateName:"calico-kube-controllers-75c5d6d7d5-", Namespace:"calico-system", SelfLink:"", UID:"5584af6e-842e-4fc9-84d1-f6b0c3bfe672", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75c5d6d7d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-119-245", ContainerID:"bc680e156ad8ef72e4774c364dbc2dc3b156e36b92ca4ba32153d78c9ed78fdc", Pod:"calico-kube-controllers-75c5d6d7d5-mwflw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali06bd388c8db", MAC:"ce:84:7c:b7:43:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:26.585295 containerd[1544]: 2025-07-07 06:05:26.581 [INFO][4518] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bc680e156ad8ef72e4774c364dbc2dc3b156e36b92ca4ba32153d78c9ed78fdc" Namespace="calico-system" Pod="calico-kube-controllers-75c5d6d7d5-mwflw" WorkloadEndpoint="172--236--119--245-k8s-calico--kube--controllers--75c5d6d7d5--mwflw-eth0" Jul 7 06:05:26.604862 systemd-networkd[1456]: calidad8590056d: Gained IPv6LL Jul 7 06:05:26.639212 systemd-networkd[1456]: cali3e4123f0c6f: Link UP Jul 7 06:05:26.641890 systemd-networkd[1456]: cali3e4123f0c6f: Gained carrier Jul 7 06:05:26.675942 containerd[1544]: 2025-07-07 06:05:26.225 [INFO][4538] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:05:26.675942 containerd[1544]: 2025-07-07 06:05:26.236 [INFO][4538] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--119--245-k8s-csi--node--driver--wxjft-eth0 csi-node-driver- calico-system 699001e3-2bb8-49d9-b0d8-60e5a17aecbb 723 0 2025-07-07 06:05:06 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-236-119-245 csi-node-driver-wxjft eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3e4123f0c6f [] [] }} ContainerID="f20cfbac4f28c230788725403aefa11230ae71e719312218832c1673815c9032" Namespace="calico-system" Pod="csi-node-driver-wxjft" WorkloadEndpoint="172--236--119--245-k8s-csi--node--driver--wxjft-" Jul 7 
06:05:26.675942 containerd[1544]: 2025-07-07 06:05:26.236 [INFO][4538] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f20cfbac4f28c230788725403aefa11230ae71e719312218832c1673815c9032" Namespace="calico-system" Pod="csi-node-driver-wxjft" WorkloadEndpoint="172--236--119--245-k8s-csi--node--driver--wxjft-eth0" Jul 7 06:05:26.675942 containerd[1544]: 2025-07-07 06:05:26.315 [INFO][4567] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f20cfbac4f28c230788725403aefa11230ae71e719312218832c1673815c9032" HandleID="k8s-pod-network.f20cfbac4f28c230788725403aefa11230ae71e719312218832c1673815c9032" Workload="172--236--119--245-k8s-csi--node--driver--wxjft-eth0" Jul 7 06:05:26.675942 containerd[1544]: 2025-07-07 06:05:26.316 [INFO][4567] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f20cfbac4f28c230788725403aefa11230ae71e719312218832c1673815c9032" HandleID="k8s-pod-network.f20cfbac4f28c230788725403aefa11230ae71e719312218832c1673815c9032" Workload="172--236--119--245-k8s-csi--node--driver--wxjft-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df1d0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-119-245", "pod":"csi-node-driver-wxjft", "timestamp":"2025-07-07 06:05:26.315050906 +0000 UTC"}, Hostname:"172-236-119-245", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:05:26.675942 containerd[1544]: 2025-07-07 06:05:26.316 [INFO][4567] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:05:26.675942 containerd[1544]: 2025-07-07 06:05:26.515 [INFO][4567] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:05:26.675942 containerd[1544]: 2025-07-07 06:05:26.515 [INFO][4567] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-119-245' Jul 7 06:05:26.675942 containerd[1544]: 2025-07-07 06:05:26.539 [INFO][4567] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f20cfbac4f28c230788725403aefa11230ae71e719312218832c1673815c9032" host="172-236-119-245" Jul 7 06:05:26.675942 containerd[1544]: 2025-07-07 06:05:26.560 [INFO][4567] ipam/ipam.go 394: Looking up existing affinities for host host="172-236-119-245" Jul 7 06:05:26.675942 containerd[1544]: 2025-07-07 06:05:26.582 [INFO][4567] ipam/ipam.go 511: Trying affinity for 192.168.2.128/26 host="172-236-119-245" Jul 7 06:05:26.675942 containerd[1544]: 2025-07-07 06:05:26.587 [INFO][4567] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.128/26 host="172-236-119-245" Jul 7 06:05:26.675942 containerd[1544]: 2025-07-07 06:05:26.591 [INFO][4567] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.128/26 host="172-236-119-245" Jul 7 06:05:26.675942 containerd[1544]: 2025-07-07 06:05:26.591 [INFO][4567] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.128/26 handle="k8s-pod-network.f20cfbac4f28c230788725403aefa11230ae71e719312218832c1673815c9032" host="172-236-119-245" Jul 7 06:05:26.675942 containerd[1544]: 2025-07-07 06:05:26.593 [INFO][4567] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f20cfbac4f28c230788725403aefa11230ae71e719312218832c1673815c9032 Jul 7 06:05:26.675942 containerd[1544]: 2025-07-07 06:05:26.598 [INFO][4567] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.128/26 handle="k8s-pod-network.f20cfbac4f28c230788725403aefa11230ae71e719312218832c1673815c9032" host="172-236-119-245" Jul 7 06:05:26.675942 containerd[1544]: 2025-07-07 06:05:26.609 [INFO][4567] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.2.135/26] block=192.168.2.128/26 
handle="k8s-pod-network.f20cfbac4f28c230788725403aefa11230ae71e719312218832c1673815c9032" host="172-236-119-245" Jul 7 06:05:26.675942 containerd[1544]: 2025-07-07 06:05:26.611 [INFO][4567] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.135/26] handle="k8s-pod-network.f20cfbac4f28c230788725403aefa11230ae71e719312218832c1673815c9032" host="172-236-119-245" Jul 7 06:05:26.675942 containerd[1544]: 2025-07-07 06:05:26.611 [INFO][4567] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:05:26.675942 containerd[1544]: 2025-07-07 06:05:26.611 [INFO][4567] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.135/26] IPv6=[] ContainerID="f20cfbac4f28c230788725403aefa11230ae71e719312218832c1673815c9032" HandleID="k8s-pod-network.f20cfbac4f28c230788725403aefa11230ae71e719312218832c1673815c9032" Workload="172--236--119--245-k8s-csi--node--driver--wxjft-eth0" Jul 7 06:05:26.677364 containerd[1544]: 2025-07-07 06:05:26.630 [INFO][4538] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f20cfbac4f28c230788725403aefa11230ae71e719312218832c1673815c9032" Namespace="calico-system" Pod="csi-node-driver-wxjft" WorkloadEndpoint="172--236--119--245-k8s-csi--node--driver--wxjft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--119--245-k8s-csi--node--driver--wxjft-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"699001e3-2bb8-49d9-b0d8-60e5a17aecbb", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-119-245", ContainerID:"", Pod:"csi-node-driver-wxjft", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3e4123f0c6f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:26.677364 containerd[1544]: 2025-07-07 06:05:26.630 [INFO][4538] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.135/32] ContainerID="f20cfbac4f28c230788725403aefa11230ae71e719312218832c1673815c9032" Namespace="calico-system" Pod="csi-node-driver-wxjft" WorkloadEndpoint="172--236--119--245-k8s-csi--node--driver--wxjft-eth0" Jul 7 06:05:26.677364 containerd[1544]: 2025-07-07 06:05:26.630 [INFO][4538] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3e4123f0c6f ContainerID="f20cfbac4f28c230788725403aefa11230ae71e719312218832c1673815c9032" Namespace="calico-system" Pod="csi-node-driver-wxjft" WorkloadEndpoint="172--236--119--245-k8s-csi--node--driver--wxjft-eth0" Jul 7 06:05:26.677364 containerd[1544]: 2025-07-07 06:05:26.643 [INFO][4538] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f20cfbac4f28c230788725403aefa11230ae71e719312218832c1673815c9032" Namespace="calico-system" Pod="csi-node-driver-wxjft" WorkloadEndpoint="172--236--119--245-k8s-csi--node--driver--wxjft-eth0" Jul 7 06:05:26.677364 containerd[1544]: 2025-07-07 06:05:26.644 [INFO][4538] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f20cfbac4f28c230788725403aefa11230ae71e719312218832c1673815c9032" Namespace="calico-system" Pod="csi-node-driver-wxjft" WorkloadEndpoint="172--236--119--245-k8s-csi--node--driver--wxjft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--119--245-k8s-csi--node--driver--wxjft-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"699001e3-2bb8-49d9-b0d8-60e5a17aecbb", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-119-245", ContainerID:"f20cfbac4f28c230788725403aefa11230ae71e719312218832c1673815c9032", Pod:"csi-node-driver-wxjft", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3e4123f0c6f", MAC:"9a:a6:6d:c8:c0:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:26.677364 containerd[1544]: 2025-07-07 06:05:26.669 [INFO][4538] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f20cfbac4f28c230788725403aefa11230ae71e719312218832c1673815c9032" 
Namespace="calico-system" Pod="csi-node-driver-wxjft" WorkloadEndpoint="172--236--119--245-k8s-csi--node--driver--wxjft-eth0" Jul 7 06:05:26.680008 containerd[1544]: time="2025-07-07T06:05:26.679145194Z" level=info msg="connecting to shim bc680e156ad8ef72e4774c364dbc2dc3b156e36b92ca4ba32153d78c9ed78fdc" address="unix:///run/containerd/s/56bc265963d154f036bad20fcf6951afcbc1dd2c32c9feebed3313fef35e7654" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:05:26.705887 containerd[1544]: time="2025-07-07T06:05:26.705848573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-789d8b94bc-596hn,Uid:42e67864-1946-4b93-ab86-d2296d74f6ca,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c79162cc1f5510d9e26b7bb6db8060d4a840a89e3465296e12229208ab570db2\"" Jul 7 06:05:26.746104 systemd[1]: Started cri-containerd-bc680e156ad8ef72e4774c364dbc2dc3b156e36b92ca4ba32153d78c9ed78fdc.scope - libcontainer container bc680e156ad8ef72e4774c364dbc2dc3b156e36b92ca4ba32153d78c9ed78fdc. Jul 7 06:05:26.748633 containerd[1544]: time="2025-07-07T06:05:26.748603569Z" level=info msg="connecting to shim f20cfbac4f28c230788725403aefa11230ae71e719312218832c1673815c9032" address="unix:///run/containerd/s/6bafcda32387eeb5c4bd00ffc9b44b7025a65699e2aec291b9f180a7b00489a5" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:05:26.796044 systemd[1]: Started cri-containerd-f20cfbac4f28c230788725403aefa11230ae71e719312218832c1673815c9032.scope - libcontainer container f20cfbac4f28c230788725403aefa11230ae71e719312218832c1673815c9032. 
Jul 7 06:05:26.847251 containerd[1544]: time="2025-07-07T06:05:26.847204656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wxjft,Uid:699001e3-2bb8-49d9-b0d8-60e5a17aecbb,Namespace:calico-system,Attempt:0,} returns sandbox id \"f20cfbac4f28c230788725403aefa11230ae71e719312218832c1673815c9032\"" Jul 7 06:05:26.872542 containerd[1544]: time="2025-07-07T06:05:26.872516408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75c5d6d7d5-mwflw,Uid:5584af6e-842e-4fc9-84d1-f6b0c3bfe672,Namespace:calico-system,Attempt:0,} returns sandbox id \"bc680e156ad8ef72e4774c364dbc2dc3b156e36b92ca4ba32153d78c9ed78fdc\"" Jul 7 06:05:27.114366 containerd[1544]: time="2025-07-07T06:05:27.114230928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-789d8b94bc-85w6d,Uid:e38fc636-275d-4334-8ad6-800c7cc4fa05,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:05:27.115842 systemd-networkd[1456]: cali7680bb52b96: Gained IPv6LL Jul 7 06:05:27.242360 systemd-networkd[1456]: caliee55dc8389b: Link UP Jul 7 06:05:27.244349 systemd-networkd[1456]: caliee55dc8389b: Gained carrier Jul 7 06:05:27.259012 containerd[1544]: 2025-07-07 06:05:27.150 [INFO][4761] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:05:27.259012 containerd[1544]: 2025-07-07 06:05:27.164 [INFO][4761] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--119--245-k8s-calico--apiserver--789d8b94bc--85w6d-eth0 calico-apiserver-789d8b94bc- calico-apiserver e38fc636-275d-4334-8ad6-800c7cc4fa05 814 0 2025-07-07 06:05:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:789d8b94bc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-236-119-245 calico-apiserver-789d8b94bc-85w6d eth0 calico-apiserver [] [] 
[kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliee55dc8389b [] [] }} ContainerID="0b5e7057c5e7ee54dfaeef892750aaea776daf04631221c84a6d7c5f4e7625fc" Namespace="calico-apiserver" Pod="calico-apiserver-789d8b94bc-85w6d" WorkloadEndpoint="172--236--119--245-k8s-calico--apiserver--789d8b94bc--85w6d-" Jul 7 06:05:27.259012 containerd[1544]: 2025-07-07 06:05:27.165 [INFO][4761] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0b5e7057c5e7ee54dfaeef892750aaea776daf04631221c84a6d7c5f4e7625fc" Namespace="calico-apiserver" Pod="calico-apiserver-789d8b94bc-85w6d" WorkloadEndpoint="172--236--119--245-k8s-calico--apiserver--789d8b94bc--85w6d-eth0" Jul 7 06:05:27.259012 containerd[1544]: 2025-07-07 06:05:27.194 [INFO][4773] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0b5e7057c5e7ee54dfaeef892750aaea776daf04631221c84a6d7c5f4e7625fc" HandleID="k8s-pod-network.0b5e7057c5e7ee54dfaeef892750aaea776daf04631221c84a6d7c5f4e7625fc" Workload="172--236--119--245-k8s-calico--apiserver--789d8b94bc--85w6d-eth0" Jul 7 06:05:27.259012 containerd[1544]: 2025-07-07 06:05:27.195 [INFO][4773] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0b5e7057c5e7ee54dfaeef892750aaea776daf04631221c84a6d7c5f4e7625fc" HandleID="k8s-pod-network.0b5e7057c5e7ee54dfaeef892750aaea776daf04631221c84a6d7c5f4e7625fc" Workload="172--236--119--245-k8s-calico--apiserver--789d8b94bc--85w6d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-236-119-245", "pod":"calico-apiserver-789d8b94bc-85w6d", "timestamp":"2025-07-07 06:05:27.194792774 +0000 UTC"}, Hostname:"172-236-119-245", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:05:27.259012 containerd[1544]: 2025-07-07 06:05:27.195 
[INFO][4773] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:05:27.259012 containerd[1544]: 2025-07-07 06:05:27.195 [INFO][4773] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:05:27.259012 containerd[1544]: 2025-07-07 06:05:27.195 [INFO][4773] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-119-245' Jul 7 06:05:27.259012 containerd[1544]: 2025-07-07 06:05:27.205 [INFO][4773] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0b5e7057c5e7ee54dfaeef892750aaea776daf04631221c84a6d7c5f4e7625fc" host="172-236-119-245" Jul 7 06:05:27.259012 containerd[1544]: 2025-07-07 06:05:27.211 [INFO][4773] ipam/ipam.go 394: Looking up existing affinities for host host="172-236-119-245" Jul 7 06:05:27.259012 containerd[1544]: 2025-07-07 06:05:27.216 [INFO][4773] ipam/ipam.go 511: Trying affinity for 192.168.2.128/26 host="172-236-119-245" Jul 7 06:05:27.259012 containerd[1544]: 2025-07-07 06:05:27.217 [INFO][4773] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.128/26 host="172-236-119-245" Jul 7 06:05:27.259012 containerd[1544]: 2025-07-07 06:05:27.220 [INFO][4773] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.128/26 host="172-236-119-245" Jul 7 06:05:27.259012 containerd[1544]: 2025-07-07 06:05:27.221 [INFO][4773] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.128/26 handle="k8s-pod-network.0b5e7057c5e7ee54dfaeef892750aaea776daf04631221c84a6d7c5f4e7625fc" host="172-236-119-245" Jul 7 06:05:27.259012 containerd[1544]: 2025-07-07 06:05:27.223 [INFO][4773] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0b5e7057c5e7ee54dfaeef892750aaea776daf04631221c84a6d7c5f4e7625fc Jul 7 06:05:27.259012 containerd[1544]: 2025-07-07 06:05:27.227 [INFO][4773] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.128/26 handle="k8s-pod-network.0b5e7057c5e7ee54dfaeef892750aaea776daf04631221c84a6d7c5f4e7625fc" 
host="172-236-119-245" Jul 7 06:05:27.259012 containerd[1544]: 2025-07-07 06:05:27.234 [INFO][4773] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.2.136/26] block=192.168.2.128/26 handle="k8s-pod-network.0b5e7057c5e7ee54dfaeef892750aaea776daf04631221c84a6d7c5f4e7625fc" host="172-236-119-245" Jul 7 06:05:27.259012 containerd[1544]: 2025-07-07 06:05:27.234 [INFO][4773] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.136/26] handle="k8s-pod-network.0b5e7057c5e7ee54dfaeef892750aaea776daf04631221c84a6d7c5f4e7625fc" host="172-236-119-245" Jul 7 06:05:27.259012 containerd[1544]: 2025-07-07 06:05:27.234 [INFO][4773] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:05:27.263195 containerd[1544]: 2025-07-07 06:05:27.234 [INFO][4773] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.136/26] IPv6=[] ContainerID="0b5e7057c5e7ee54dfaeef892750aaea776daf04631221c84a6d7c5f4e7625fc" HandleID="k8s-pod-network.0b5e7057c5e7ee54dfaeef892750aaea776daf04631221c84a6d7c5f4e7625fc" Workload="172--236--119--245-k8s-calico--apiserver--789d8b94bc--85w6d-eth0" Jul 7 06:05:27.263195 containerd[1544]: 2025-07-07 06:05:27.238 [INFO][4761] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0b5e7057c5e7ee54dfaeef892750aaea776daf04631221c84a6d7c5f4e7625fc" Namespace="calico-apiserver" Pod="calico-apiserver-789d8b94bc-85w6d" WorkloadEndpoint="172--236--119--245-k8s-calico--apiserver--789d8b94bc--85w6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--119--245-k8s-calico--apiserver--789d8b94bc--85w6d-eth0", GenerateName:"calico-apiserver-789d8b94bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"e38fc636-275d-4334-8ad6-800c7cc4fa05", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"789d8b94bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-119-245", ContainerID:"", Pod:"calico-apiserver-789d8b94bc-85w6d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliee55dc8389b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:27.263195 containerd[1544]: 2025-07-07 06:05:27.238 [INFO][4761] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.136/32] ContainerID="0b5e7057c5e7ee54dfaeef892750aaea776daf04631221c84a6d7c5f4e7625fc" Namespace="calico-apiserver" Pod="calico-apiserver-789d8b94bc-85w6d" WorkloadEndpoint="172--236--119--245-k8s-calico--apiserver--789d8b94bc--85w6d-eth0" Jul 7 06:05:27.263195 containerd[1544]: 2025-07-07 06:05:27.238 [INFO][4761] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliee55dc8389b ContainerID="0b5e7057c5e7ee54dfaeef892750aaea776daf04631221c84a6d7c5f4e7625fc" Namespace="calico-apiserver" Pod="calico-apiserver-789d8b94bc-85w6d" WorkloadEndpoint="172--236--119--245-k8s-calico--apiserver--789d8b94bc--85w6d-eth0" Jul 7 06:05:27.263195 containerd[1544]: 2025-07-07 06:05:27.242 [INFO][4761] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0b5e7057c5e7ee54dfaeef892750aaea776daf04631221c84a6d7c5f4e7625fc" Namespace="calico-apiserver" 
Pod="calico-apiserver-789d8b94bc-85w6d" WorkloadEndpoint="172--236--119--245-k8s-calico--apiserver--789d8b94bc--85w6d-eth0" Jul 7 06:05:27.263195 containerd[1544]: 2025-07-07 06:05:27.242 [INFO][4761] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0b5e7057c5e7ee54dfaeef892750aaea776daf04631221c84a6d7c5f4e7625fc" Namespace="calico-apiserver" Pod="calico-apiserver-789d8b94bc-85w6d" WorkloadEndpoint="172--236--119--245-k8s-calico--apiserver--789d8b94bc--85w6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--119--245-k8s-calico--apiserver--789d8b94bc--85w6d-eth0", GenerateName:"calico-apiserver-789d8b94bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"e38fc636-275d-4334-8ad6-800c7cc4fa05", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"789d8b94bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-119-245", ContainerID:"0b5e7057c5e7ee54dfaeef892750aaea776daf04631221c84a6d7c5f4e7625fc", Pod:"calico-apiserver-789d8b94bc-85w6d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliee55dc8389b", MAC:"26:81:8c:93:89:25", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:27.263390 containerd[1544]: 2025-07-07 06:05:27.252 [INFO][4761] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0b5e7057c5e7ee54dfaeef892750aaea776daf04631221c84a6d7c5f4e7625fc" Namespace="calico-apiserver" Pod="calico-apiserver-789d8b94bc-85w6d" WorkloadEndpoint="172--236--119--245-k8s-calico--apiserver--789d8b94bc--85w6d-eth0" Jul 7 06:05:27.284430 containerd[1544]: time="2025-07-07T06:05:27.284400987Z" level=info msg="connecting to shim 0b5e7057c5e7ee54dfaeef892750aaea776daf04631221c84a6d7c5f4e7625fc" address="unix:///run/containerd/s/bb3c661ede02bdf2968ce3b2d91c08d53ccfb4ec1f982effb5ecdd91b8022b80" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:05:27.300927 kubelet[2727]: E0707 06:05:27.300908 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:05:27.320826 systemd[1]: Started cri-containerd-0b5e7057c5e7ee54dfaeef892750aaea776daf04631221c84a6d7c5f4e7625fc.scope - libcontainer container 0b5e7057c5e7ee54dfaeef892750aaea776daf04631221c84a6d7c5f4e7625fc. Jul 7 06:05:27.378948 containerd[1544]: time="2025-07-07T06:05:27.378820508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-789d8b94bc-85w6d,Uid:e38fc636-275d-4334-8ad6-800c7cc4fa05,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0b5e7057c5e7ee54dfaeef892750aaea776daf04631221c84a6d7c5f4e7625fc\"" Jul 7 06:05:28.012897 systemd-networkd[1456]: cali06163548fe0: Gained IPv6LL Jul 7 06:05:28.137002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount294108447.mount: Deactivated successfully. 
Jul 7 06:05:28.205297 systemd-networkd[1456]: cali06bd388c8db: Gained IPv6LL Jul 7 06:05:28.267884 systemd-networkd[1456]: cali3e4123f0c6f: Gained IPv6LL Jul 7 06:05:28.589317 containerd[1544]: time="2025-07-07T06:05:28.589187237Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:28.590375 containerd[1544]: time="2025-07-07T06:05:28.590125345Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 7 06:05:28.590817 containerd[1544]: time="2025-07-07T06:05:28.590772663Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:28.592975 containerd[1544]: time="2025-07-07T06:05:28.592952848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:28.593753 containerd[1544]: time="2025-07-07T06:05:28.593731456Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 3.02356725s" Jul 7 06:05:28.593954 containerd[1544]: time="2025-07-07T06:05:28.593817976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 7 06:05:28.595794 containerd[1544]: time="2025-07-07T06:05:28.595765091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 06:05:28.600619 containerd[1544]: 
time="2025-07-07T06:05:28.600586870Z" level=info msg="CreateContainer within sandbox \"44b4132545076baba6589278f3d75da165e28d6f4a220c0288d35a3173637629\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 7 06:05:28.614738 containerd[1544]: time="2025-07-07T06:05:28.614659976Z" level=info msg="Container 1dec3c4b9187cfbc0c3ecec6df0edada53c6b6c48276864ebb454aaa1f6d5aea: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:05:28.626136 containerd[1544]: time="2025-07-07T06:05:28.626106188Z" level=info msg="CreateContainer within sandbox \"44b4132545076baba6589278f3d75da165e28d6f4a220c0288d35a3173637629\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"1dec3c4b9187cfbc0c3ecec6df0edada53c6b6c48276864ebb454aaa1f6d5aea\"" Jul 7 06:05:28.626634 containerd[1544]: time="2025-07-07T06:05:28.626611287Z" level=info msg="StartContainer for \"1dec3c4b9187cfbc0c3ecec6df0edada53c6b6c48276864ebb454aaa1f6d5aea\"" Jul 7 06:05:28.628865 containerd[1544]: time="2025-07-07T06:05:28.628842722Z" level=info msg="connecting to shim 1dec3c4b9187cfbc0c3ecec6df0edada53c6b6c48276864ebb454aaa1f6d5aea" address="unix:///run/containerd/s/5c5a882cfa77d7995f916d5092294a683a518f6c06fbe4661b208c5deb3ce675" protocol=ttrpc version=3 Jul 7 06:05:28.658822 systemd[1]: Started cri-containerd-1dec3c4b9187cfbc0c3ecec6df0edada53c6b6c48276864ebb454aaa1f6d5aea.scope - libcontainer container 1dec3c4b9187cfbc0c3ecec6df0edada53c6b6c48276864ebb454aaa1f6d5aea. 
Jul 7 06:05:28.714381 containerd[1544]: time="2025-07-07T06:05:28.714198716Z" level=info msg="StartContainer for \"1dec3c4b9187cfbc0c3ecec6df0edada53c6b6c48276864ebb454aaa1f6d5aea\" returns successfully" Jul 7 06:05:29.291859 systemd-networkd[1456]: caliee55dc8389b: Gained IPv6LL Jul 7 06:05:30.188608 containerd[1544]: time="2025-07-07T06:05:30.188550091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:30.189654 containerd[1544]: time="2025-07-07T06:05:30.189438779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 7 06:05:30.190230 containerd[1544]: time="2025-07-07T06:05:30.190201798Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:30.191639 containerd[1544]: time="2025-07-07T06:05:30.191616874Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:30.192232 containerd[1544]: time="2025-07-07T06:05:30.192193593Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 1.596295952s" Jul 7 06:05:30.192276 containerd[1544]: time="2025-07-07T06:05:30.192231463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 06:05:30.194949 containerd[1544]: 
time="2025-07-07T06:05:30.194918058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 7 06:05:30.197858 containerd[1544]: time="2025-07-07T06:05:30.197825661Z" level=info msg="CreateContainer within sandbox \"c79162cc1f5510d9e26b7bb6db8060d4a840a89e3465296e12229208ab570db2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 06:05:30.205282 containerd[1544]: time="2025-07-07T06:05:30.204675036Z" level=info msg="Container 1abde460512c49271c7bfd5ddea59d1f480be2398ee7dbd0a5aba469cdf67213: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:05:30.212893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3670621182.mount: Deactivated successfully. Jul 7 06:05:30.237258 containerd[1544]: time="2025-07-07T06:05:30.237226645Z" level=info msg="CreateContainer within sandbox \"c79162cc1f5510d9e26b7bb6db8060d4a840a89e3465296e12229208ab570db2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1abde460512c49271c7bfd5ddea59d1f480be2398ee7dbd0a5aba469cdf67213\"" Jul 7 06:05:30.237898 containerd[1544]: time="2025-07-07T06:05:30.237867734Z" level=info msg="StartContainer for \"1abde460512c49271c7bfd5ddea59d1f480be2398ee7dbd0a5aba469cdf67213\"" Jul 7 06:05:30.239131 containerd[1544]: time="2025-07-07T06:05:30.239070022Z" level=info msg="connecting to shim 1abde460512c49271c7bfd5ddea59d1f480be2398ee7dbd0a5aba469cdf67213" address="unix:///run/containerd/s/3933a2460a6361104520d79e6bdc5fe7f467f4cd68bd9e086e180a65065737f5" protocol=ttrpc version=3 Jul 7 06:05:30.263963 systemd[1]: Started cri-containerd-1abde460512c49271c7bfd5ddea59d1f480be2398ee7dbd0a5aba469cdf67213.scope - libcontainer container 1abde460512c49271c7bfd5ddea59d1f480be2398ee7dbd0a5aba469cdf67213. 
Jul 7 06:05:30.322789 kubelet[2727]: I0707 06:05:30.322382 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:05:30.337105 containerd[1544]: time="2025-07-07T06:05:30.337062059Z" level=info msg="StartContainer for \"1abde460512c49271c7bfd5ddea59d1f480be2398ee7dbd0a5aba469cdf67213\" returns successfully" Jul 7 06:05:31.073918 containerd[1544]: time="2025-07-07T06:05:31.073612663Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:31.075631 containerd[1544]: time="2025-07-07T06:05:31.075295790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 7 06:05:31.076774 containerd[1544]: time="2025-07-07T06:05:31.076754067Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:31.078023 containerd[1544]: time="2025-07-07T06:05:31.078002834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:31.079822 containerd[1544]: time="2025-07-07T06:05:31.079799891Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 884.850173ms" Jul 7 06:05:31.079925 containerd[1544]: time="2025-07-07T06:05:31.079905930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 7 06:05:31.081891 containerd[1544]: 
time="2025-07-07T06:05:31.081834716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 7 06:05:31.086717 containerd[1544]: time="2025-07-07T06:05:31.086494087Z" level=info msg="CreateContainer within sandbox \"f20cfbac4f28c230788725403aefa11230ae71e719312218832c1673815c9032\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 7 06:05:31.097878 containerd[1544]: time="2025-07-07T06:05:31.097856263Z" level=info msg="Container f4fe9f9717d3761164fd7b0ae13d82ec84121e5e95b12821f0a1a1563855d93a: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:05:31.107178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount105693199.mount: Deactivated successfully. Jul 7 06:05:31.110622 containerd[1544]: time="2025-07-07T06:05:31.110575027Z" level=info msg="CreateContainer within sandbox \"f20cfbac4f28c230788725403aefa11230ae71e719312218832c1673815c9032\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f4fe9f9717d3761164fd7b0ae13d82ec84121e5e95b12821f0a1a1563855d93a\"" Jul 7 06:05:31.111737 containerd[1544]: time="2025-07-07T06:05:31.111330835Z" level=info msg="StartContainer for \"f4fe9f9717d3761164fd7b0ae13d82ec84121e5e95b12821f0a1a1563855d93a\"" Jul 7 06:05:31.113800 containerd[1544]: time="2025-07-07T06:05:31.113349871Z" level=info msg="connecting to shim f4fe9f9717d3761164fd7b0ae13d82ec84121e5e95b12821f0a1a1563855d93a" address="unix:///run/containerd/s/6bafcda32387eeb5c4bd00ffc9b44b7025a65699e2aec291b9f180a7b00489a5" protocol=ttrpc version=3 Jul 7 06:05:31.141465 systemd[1]: Started cri-containerd-f4fe9f9717d3761164fd7b0ae13d82ec84121e5e95b12821f0a1a1563855d93a.scope - libcontainer container f4fe9f9717d3761164fd7b0ae13d82ec84121e5e95b12821f0a1a1563855d93a. 
Jul 7 06:05:31.225892 containerd[1544]: time="2025-07-07T06:05:31.225847148Z" level=info msg="StartContainer for \"f4fe9f9717d3761164fd7b0ae13d82ec84121e5e95b12821f0a1a1563855d93a\" returns successfully" Jul 7 06:05:31.344056 kubelet[2727]: I0707 06:05:31.343897 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-9jrmp" podStartSLOduration=23.318914887 podStartE2EDuration="26.343881964s" podCreationTimestamp="2025-07-07 06:05:05 +0000 UTC" firstStartedPulling="2025-07-07 06:05:25.569839537 +0000 UTC m=+37.566964693" lastFinishedPulling="2025-07-07 06:05:28.594806614 +0000 UTC m=+40.591931770" observedRunningTime="2025-07-07 06:05:29.326885991 +0000 UTC m=+41.324011147" watchObservedRunningTime="2025-07-07 06:05:31.343881964 +0000 UTC m=+43.341007120" Jul 7 06:05:31.632670 kubelet[2727]: I0707 06:05:31.632524 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:05:31.632906 kubelet[2727]: E0707 06:05:31.632859 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:05:31.644936 kubelet[2727]: I0707 06:05:31.644869 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-789d8b94bc-596hn" podStartSLOduration=25.160233525 podStartE2EDuration="28.64485804s" podCreationTimestamp="2025-07-07 06:05:03 +0000 UTC" firstStartedPulling="2025-07-07 06:05:26.708886785 +0000 UTC m=+38.706011941" lastFinishedPulling="2025-07-07 06:05:30.1935113 +0000 UTC m=+42.190636456" observedRunningTime="2025-07-07 06:05:31.342873945 +0000 UTC m=+43.339999101" watchObservedRunningTime="2025-07-07 06:05:31.64485804 +0000 UTC m=+43.641983196" Jul 7 06:05:32.339674 kubelet[2727]: E0707 06:05:32.339294 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:05:32.340925 kubelet[2727]: I0707 06:05:32.340899 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:05:32.344012 systemd-networkd[1456]: vxlan.calico: Link UP Jul 7 06:05:32.344027 systemd-networkd[1456]: vxlan.calico: Gained carrier Jul 7 06:05:33.353982 containerd[1544]: time="2025-07-07T06:05:33.353933237Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:33.354939 containerd[1544]: time="2025-07-07T06:05:33.354801845Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 7 06:05:33.355627 containerd[1544]: time="2025-07-07T06:05:33.355597544Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:33.357255 containerd[1544]: time="2025-07-07T06:05:33.357223701Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:33.357806 containerd[1544]: time="2025-07-07T06:05:33.357778790Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 2.275806394s" Jul 7 06:05:33.357883 containerd[1544]: time="2025-07-07T06:05:33.357867770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference 
\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 7 06:05:33.358838 containerd[1544]: time="2025-07-07T06:05:33.358815508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 06:05:33.378806 containerd[1544]: time="2025-07-07T06:05:33.377744572Z" level=info msg="CreateContainer within sandbox \"bc680e156ad8ef72e4774c364dbc2dc3b156e36b92ca4ba32153d78c9ed78fdc\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 7 06:05:33.383850 containerd[1544]: time="2025-07-07T06:05:33.383830051Z" level=info msg="Container af701b1a93951f9a4c4266caccafeaa14381bf16a1a238e15296554459872778: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:05:33.392182 containerd[1544]: time="2025-07-07T06:05:33.392147616Z" level=info msg="CreateContainer within sandbox \"bc680e156ad8ef72e4774c364dbc2dc3b156e36b92ca4ba32153d78c9ed78fdc\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"af701b1a93951f9a4c4266caccafeaa14381bf16a1a238e15296554459872778\"" Jul 7 06:05:33.393156 containerd[1544]: time="2025-07-07T06:05:33.393129283Z" level=info msg="StartContainer for \"af701b1a93951f9a4c4266caccafeaa14381bf16a1a238e15296554459872778\"" Jul 7 06:05:33.394855 containerd[1544]: time="2025-07-07T06:05:33.394792000Z" level=info msg="connecting to shim af701b1a93951f9a4c4266caccafeaa14381bf16a1a238e15296554459872778" address="unix:///run/containerd/s/56bc265963d154f036bad20fcf6951afcbc1dd2c32c9feebed3313fef35e7654" protocol=ttrpc version=3 Jul 7 06:05:33.426850 systemd[1]: Started cri-containerd-af701b1a93951f9a4c4266caccafeaa14381bf16a1a238e15296554459872778.scope - libcontainer container af701b1a93951f9a4c4266caccafeaa14381bf16a1a238e15296554459872778. 
Jul 7 06:05:33.491652 containerd[1544]: time="2025-07-07T06:05:33.491611848Z" level=info msg="StartContainer for \"af701b1a93951f9a4c4266caccafeaa14381bf16a1a238e15296554459872778\" returns successfully" Jul 7 06:05:33.586351 containerd[1544]: time="2025-07-07T06:05:33.586292350Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:33.587065 containerd[1544]: time="2025-07-07T06:05:33.587031878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 7 06:05:33.589191 containerd[1544]: time="2025-07-07T06:05:33.589113475Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 230.271127ms" Jul 7 06:05:33.589191 containerd[1544]: time="2025-07-07T06:05:33.589138725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 06:05:33.591057 containerd[1544]: time="2025-07-07T06:05:33.590884371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 7 06:05:33.595751 containerd[1544]: time="2025-07-07T06:05:33.595723053Z" level=info msg="CreateContainer within sandbox \"0b5e7057c5e7ee54dfaeef892750aaea776daf04631221c84a6d7c5f4e7625fc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 06:05:33.602879 containerd[1544]: time="2025-07-07T06:05:33.602844339Z" level=info msg="Container 040fcf6606adf231c517f3111dd08e31e0c7fb4000e6d7c5b476c782fcdd125d: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:05:33.618155 containerd[1544]: 
time="2025-07-07T06:05:33.618078261Z" level=info msg="CreateContainer within sandbox \"0b5e7057c5e7ee54dfaeef892750aaea776daf04631221c84a6d7c5f4e7625fc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"040fcf6606adf231c517f3111dd08e31e0c7fb4000e6d7c5b476c782fcdd125d\"" Jul 7 06:05:33.620407 containerd[1544]: time="2025-07-07T06:05:33.619630338Z" level=info msg="StartContainer for \"040fcf6606adf231c517f3111dd08e31e0c7fb4000e6d7c5b476c782fcdd125d\"" Jul 7 06:05:33.620692 containerd[1544]: time="2025-07-07T06:05:33.620673755Z" level=info msg="connecting to shim 040fcf6606adf231c517f3111dd08e31e0c7fb4000e6d7c5b476c782fcdd125d" address="unix:///run/containerd/s/bb3c661ede02bdf2968ce3b2d91c08d53ccfb4ec1f982effb5ecdd91b8022b80" protocol=ttrpc version=3 Jul 7 06:05:33.641836 systemd[1]: Started cri-containerd-040fcf6606adf231c517f3111dd08e31e0c7fb4000e6d7c5b476c782fcdd125d.scope - libcontainer container 040fcf6606adf231c517f3111dd08e31e0c7fb4000e6d7c5b476c782fcdd125d. 
Jul 7 06:05:33.701959 containerd[1544]: time="2025-07-07T06:05:33.701900003Z" level=info msg="StartContainer for \"040fcf6606adf231c517f3111dd08e31e0c7fb4000e6d7c5b476c782fcdd125d\" returns successfully" Jul 7 06:05:34.283851 systemd-networkd[1456]: vxlan.calico: Gained IPv6LL Jul 7 06:05:34.384733 kubelet[2727]: I0707 06:05:34.384636 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-75c5d6d7d5-mwflw" podStartSLOduration=21.900028089 podStartE2EDuration="28.384624382s" podCreationTimestamp="2025-07-07 06:05:06 +0000 UTC" firstStartedPulling="2025-07-07 06:05:26.873821645 +0000 UTC m=+38.870946801" lastFinishedPulling="2025-07-07 06:05:33.358417928 +0000 UTC m=+45.355543094" observedRunningTime="2025-07-07 06:05:34.383199465 +0000 UTC m=+46.380324641" watchObservedRunningTime="2025-07-07 06:05:34.384624382 +0000 UTC m=+46.381749548" Jul 7 06:05:34.399286 kubelet[2727]: I0707 06:05:34.399243 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-789d8b94bc-85w6d" podStartSLOduration=25.189115608 podStartE2EDuration="31.399233366s" podCreationTimestamp="2025-07-07 06:05:03 +0000 UTC" firstStartedPulling="2025-07-07 06:05:27.380322034 +0000 UTC m=+39.377447190" lastFinishedPulling="2025-07-07 06:05:33.590439782 +0000 UTC m=+45.587564948" observedRunningTime="2025-07-07 06:05:34.398985746 +0000 UTC m=+46.396110902" watchObservedRunningTime="2025-07-07 06:05:34.399233366 +0000 UTC m=+46.396358522" Jul 7 06:05:34.448150 containerd[1544]: time="2025-07-07T06:05:34.447586419Z" level=info msg="TaskExit event in podsandbox handler container_id:\"af701b1a93951f9a4c4266caccafeaa14381bf16a1a238e15296554459872778\" id:\"837e4e4cba9d6a166cc94cc30d23928b575001d54f8c0b516a7537b4fc14ef17\" pid:5284 exited_at:{seconds:1751868334 nanos:446975140}" Jul 7 06:05:34.824895 containerd[1544]: time="2025-07-07T06:05:34.824831262Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:34.826022 containerd[1544]: time="2025-07-07T06:05:34.825991000Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 7 06:05:34.827784 containerd[1544]: time="2025-07-07T06:05:34.827752247Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:34.833884 containerd[1544]: time="2025-07-07T06:05:34.833828907Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:34.835263 containerd[1544]: time="2025-07-07T06:05:34.835212154Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.244303623s" Jul 7 06:05:34.835263 containerd[1544]: time="2025-07-07T06:05:34.835239154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 7 06:05:34.839643 containerd[1544]: time="2025-07-07T06:05:34.839580656Z" level=info msg="CreateContainer within sandbox \"f20cfbac4f28c230788725403aefa11230ae71e719312218832c1673815c9032\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 7 06:05:34.845841 containerd[1544]: time="2025-07-07T06:05:34.845800705Z" level=info 
msg="Container 2bd7ff70c030a39d650b9e91813f16537115201bb9be73aee5d758fb8af1ca27: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:05:34.858998 containerd[1544]: time="2025-07-07T06:05:34.858967811Z" level=info msg="CreateContainer within sandbox \"f20cfbac4f28c230788725403aefa11230ae71e719312218832c1673815c9032\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2bd7ff70c030a39d650b9e91813f16537115201bb9be73aee5d758fb8af1ca27\"" Jul 7 06:05:34.859921 containerd[1544]: time="2025-07-07T06:05:34.859663850Z" level=info msg="StartContainer for \"2bd7ff70c030a39d650b9e91813f16537115201bb9be73aee5d758fb8af1ca27\"" Jul 7 06:05:34.861646 containerd[1544]: time="2025-07-07T06:05:34.861594667Z" level=info msg="connecting to shim 2bd7ff70c030a39d650b9e91813f16537115201bb9be73aee5d758fb8af1ca27" address="unix:///run/containerd/s/6bafcda32387eeb5c4bd00ffc9b44b7025a65699e2aec291b9f180a7b00489a5" protocol=ttrpc version=3 Jul 7 06:05:34.888847 systemd[1]: Started cri-containerd-2bd7ff70c030a39d650b9e91813f16537115201bb9be73aee5d758fb8af1ca27.scope - libcontainer container 2bd7ff70c030a39d650b9e91813f16537115201bb9be73aee5d758fb8af1ca27. 
Jul 7 06:05:34.941062 containerd[1544]: time="2025-07-07T06:05:34.940924264Z" level=info msg="StartContainer for \"2bd7ff70c030a39d650b9e91813f16537115201bb9be73aee5d758fb8af1ca27\" returns successfully" Jul 7 06:05:35.169281 kubelet[2727]: I0707 06:05:35.169080 2727 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 7 06:05:35.171193 kubelet[2727]: I0707 06:05:35.171156 2727 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 7 06:05:35.369416 kubelet[2727]: I0707 06:05:35.369384 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:05:35.934012 kubelet[2727]: I0707 06:05:35.933915 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-wxjft" podStartSLOduration=21.946075657 podStartE2EDuration="29.933891957s" podCreationTimestamp="2025-07-07 06:05:06 +0000 UTC" firstStartedPulling="2025-07-07 06:05:26.849048241 +0000 UTC m=+38.846173407" lastFinishedPulling="2025-07-07 06:05:34.836864551 +0000 UTC m=+46.833989707" observedRunningTime="2025-07-07 06:05:35.382919811 +0000 UTC m=+47.380044967" watchObservedRunningTime="2025-07-07 06:05:35.933891957 +0000 UTC m=+47.931017113" Jul 7 06:05:41.286729 kubelet[2727]: I0707 06:05:41.286548 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:05:41.371977 containerd[1544]: time="2025-07-07T06:05:41.371918851Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1dec3c4b9187cfbc0c3ecec6df0edada53c6b6c48276864ebb454aaa1f6d5aea\" id:\"05bfe5a0ab9a67a97c1657ae90294ad9158e7d067eb8a3b64c9bf231db6935b4\" pid:5355 exited_at:{seconds:1751868341 nanos:371311711}" Jul 7 06:05:41.548909 containerd[1544]: time="2025-07-07T06:05:41.548767606Z" level=info msg="TaskExit event in 
podsandbox handler container_id:\"1dec3c4b9187cfbc0c3ecec6df0edada53c6b6c48276864ebb454aaa1f6d5aea\" id:\"e813a5334b91aae9328f76b8f2efa4011adb3cc1206ff02840f4d750c1c162b5\" pid:5380 exited_at:{seconds:1751868341 nanos:547977378}" Jul 7 06:05:51.113980 containerd[1544]: time="2025-07-07T06:05:51.113786430Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f9500fbb35c808a738324b0631ac63e92db772c81e96ef05ea9825535fdf7448\" id:\"0743b4b7a2d500d31524e65155b7bd6b348b5a66cdaac716ee17b240ad04eafd\" pid:5416 exited_at:{seconds:1751868351 nanos:113470521}" Jul 7 06:06:03.222560 containerd[1544]: time="2025-07-07T06:06:03.222301302Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1dec3c4b9187cfbc0c3ecec6df0edada53c6b6c48276864ebb454aaa1f6d5aea\" id:\"5942f30ea14a1a71ba3635a6b8bb666fb80d2685f48354b980768b2146d8c4ff\" pid:5453 exited_at:{seconds:1751868363 nanos:221951379}" Jul 7 06:06:04.405107 containerd[1544]: time="2025-07-07T06:06:04.405055224Z" level=info msg="TaskExit event in podsandbox handler container_id:\"af701b1a93951f9a4c4266caccafeaa14381bf16a1a238e15296554459872778\" id:\"52d1ccb2217102dafad13da45f98ae41f2ee4b9ed895282986e1a91c51488166\" pid:5477 exited_at:{seconds:1751868364 nanos:403617865}" Jul 7 06:06:07.114960 kubelet[2727]: E0707 06:06:07.114878 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:06:10.973735 kubelet[2727]: I0707 06:06:10.972930 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:06:11.559949 containerd[1544]: time="2025-07-07T06:06:11.559889026Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1dec3c4b9187cfbc0c3ecec6df0edada53c6b6c48276864ebb454aaa1f6d5aea\" id:\"af03f83e2fce776f980555a9bddb86e1be6eba0886288376349706df78aa1629\" pid:5501 exited_at:{seconds:1751868371 nanos:559336665}" Jul 7 
06:06:12.115174 kubelet[2727]: E0707 06:06:12.114197 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:06:14.115625 kubelet[2727]: E0707 06:06:14.115583 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:06:21.114112 kubelet[2727]: E0707 06:06:21.114074 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:06:21.144798 containerd[1544]: time="2025-07-07T06:06:21.144757678Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f9500fbb35c808a738324b0631ac63e92db772c81e96ef05ea9825535fdf7448\" id:\"e5d2dd1d6a5eb9a2d5d65b13e38227f80df6cb6b49ff9da116ad231ad36b8733\" pid:5529 exited_at:{seconds:1751868381 nanos:144236506}" Jul 7 06:06:24.334505 containerd[1544]: time="2025-07-07T06:06:24.334395709Z" level=info msg="TaskExit event in podsandbox handler container_id:\"af701b1a93951f9a4c4266caccafeaa14381bf16a1a238e15296554459872778\" id:\"54087cba35e2e30507a19e67b5f73e4900b70028cb57e4fb9f6bc7d3b7c299f9\" pid:5552 exited_at:{seconds:1751868384 nanos:334147302}" Jul 7 06:06:34.412657 containerd[1544]: time="2025-07-07T06:06:34.412613688Z" level=info msg="TaskExit event in podsandbox handler container_id:\"af701b1a93951f9a4c4266caccafeaa14381bf16a1a238e15296554459872778\" id:\"3d377d311e8352f3bb85eb0de08f90930e02bacee0135b84e47b4ef110f37f59\" pid:5576 exited_at:{seconds:1751868394 nanos:412216941}" Jul 7 06:06:36.114961 kubelet[2727]: E0707 06:06:36.114904 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 
172.232.0.19 172.232.0.20" Jul 7 06:06:37.114887 kubelet[2727]: E0707 06:06:37.114752 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:06:41.463806 containerd[1544]: time="2025-07-07T06:06:41.463715587Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1dec3c4b9187cfbc0c3ecec6df0edada53c6b6c48276864ebb454aaa1f6d5aea\" id:\"ac34f446506ba0e0a230d8b14949a0ba1dc1dbf04e0213b19e9801b470d45804\" pid:5598 exited_at:{seconds:1751868401 nanos:463391659}" Jul 7 06:06:48.339427 update_engine[1530]: I20250707 06:06:48.339358 1530 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 7 06:06:48.339427 update_engine[1530]: I20250707 06:06:48.339411 1530 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 7 06:06:48.339952 update_engine[1530]: I20250707 06:06:48.339641 1530 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 7 06:06:48.340655 update_engine[1530]: I20250707 06:06:48.340627 1530 omaha_request_params.cc:62] Current group set to alpha Jul 7 06:06:48.341100 update_engine[1530]: I20250707 06:06:48.340773 1530 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 7 06:06:48.341100 update_engine[1530]: I20250707 06:06:48.340788 1530 update_attempter.cc:643] Scheduling an action processor start. 
Jul 7 06:06:48.341100 update_engine[1530]: I20250707 06:06:48.340806 1530 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 7 06:06:48.343868 locksmithd[1561]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 7 06:06:48.346059 update_engine[1530]: I20250707 06:06:48.345743 1530 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 7 06:06:48.346059 update_engine[1530]: I20250707 06:06:48.345816 1530 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 7 06:06:48.346059 update_engine[1530]: I20250707 06:06:48.345825 1530 omaha_request_action.cc:272] Request: Jul 7 06:06:48.346059 update_engine[1530]: Jul 7 06:06:48.346059 update_engine[1530]: Jul 7 06:06:48.346059 update_engine[1530]: Jul 7 06:06:48.346059 update_engine[1530]: Jul 7 06:06:48.346059 update_engine[1530]: Jul 7 06:06:48.346059 update_engine[1530]: Jul 7 06:06:48.346059 update_engine[1530]: Jul 7 06:06:48.346059 update_engine[1530]: Jul 7 06:06:48.346059 update_engine[1530]: I20250707 06:06:48.345832 1530 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 06:06:48.350519 update_engine[1530]: I20250707 06:06:48.350432 1530 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 06:06:48.350966 update_engine[1530]: I20250707 06:06:48.350893 1530 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 7 06:06:48.353444 update_engine[1530]: E20250707 06:06:48.353382 1530 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 06:06:48.353444 update_engine[1530]: I20250707 06:06:48.353456 1530 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 7 06:06:51.122148 containerd[1544]: time="2025-07-07T06:06:51.122102215Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f9500fbb35c808a738324b0631ac63e92db772c81e96ef05ea9825535fdf7448\" id:\"0e4855378341b59dafd1016562fafb4b95b3a0f65a37c43ed9918414009e9f86\" pid:5622 exited_at:{seconds:1751868411 nanos:121755087}" Jul 7 06:06:58.279867 update_engine[1530]: I20250707 06:06:58.279770 1530 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 06:06:58.280392 update_engine[1530]: I20250707 06:06:58.280089 1530 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 06:06:58.280392 update_engine[1530]: I20250707 06:06:58.280337 1530 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 7 06:06:58.281127 update_engine[1530]: E20250707 06:06:58.281098 1530 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 06:06:58.281166 update_engine[1530]: I20250707 06:06:58.281147 1530 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 7 06:07:03.075329 containerd[1544]: time="2025-07-07T06:07:03.075286448Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1dec3c4b9187cfbc0c3ecec6df0edada53c6b6c48276864ebb454aaa1f6d5aea\" id:\"245aca187d16782bea00dcdc4f749eb346de44fd147fe502e3102c874fe687fb\" pid:5654 exited_at:{seconds:1751868423 nanos:75004969}" Jul 7 06:07:04.397325 containerd[1544]: time="2025-07-07T06:07:04.397279495Z" level=info msg="TaskExit event in podsandbox handler container_id:\"af701b1a93951f9a4c4266caccafeaa14381bf16a1a238e15296554459872778\" id:\"15bca1b0a66119396243daf97d89243c19439929266e2d839b5c6c74fcf9f8e0\" pid:5677 exited_at:{seconds:1751868424 nanos:397102886}" Jul 7 06:07:08.279814 update_engine[1530]: I20250707 06:07:08.279744 1530 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 06:07:08.280216 update_engine[1530]: I20250707 06:07:08.280030 1530 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 06:07:08.280309 update_engine[1530]: I20250707 06:07:08.280278 1530 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 7 06:07:08.281287 update_engine[1530]: E20250707 06:07:08.281210 1530 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 06:07:08.281366 update_engine[1530]: I20250707 06:07:08.281330 1530 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 7 06:07:11.114198 kubelet[2727]: E0707 06:07:11.114166 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:07:11.452081 containerd[1544]: time="2025-07-07T06:07:11.451966247Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1dec3c4b9187cfbc0c3ecec6df0edada53c6b6c48276864ebb454aaa1f6d5aea\" id:\"d6108e95f5a8aa56dfb8c608d74d772ca5cf042d3c7f9327bc7c10d968a7518e\" pid:5720 exited_at:{seconds:1751868431 nanos:451679199}" Jul 7 06:07:18.280959 update_engine[1530]: I20250707 06:07:18.280630 1530 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 06:07:18.281764 update_engine[1530]: I20250707 06:07:18.281613 1530 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 06:07:18.282196 update_engine[1530]: I20250707 06:07:18.282155 1530 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 7 06:07:18.283151 update_engine[1530]: E20250707 06:07:18.283102 1530 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 06:07:18.283151 update_engine[1530]: I20250707 06:07:18.283159 1530 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 7 06:07:18.283265 update_engine[1530]: I20250707 06:07:18.283169 1530 omaha_request_action.cc:617] Omaha request response: Jul 7 06:07:18.283825 update_engine[1530]: E20250707 06:07:18.283795 1530 omaha_request_action.cc:636] Omaha request network transfer failed. 
Jul 7 06:07:18.283885 update_engine[1530]: I20250707 06:07:18.283834 1530 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jul 7 06:07:18.283885 update_engine[1530]: I20250707 06:07:18.283842 1530 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 7 06:07:18.283885 update_engine[1530]: I20250707 06:07:18.283849 1530 update_attempter.cc:306] Processing Done. Jul 7 06:07:18.283885 update_engine[1530]: E20250707 06:07:18.283862 1530 update_attempter.cc:619] Update failed. Jul 7 06:07:18.283885 update_engine[1530]: I20250707 06:07:18.283869 1530 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jul 7 06:07:18.283885 update_engine[1530]: I20250707 06:07:18.283874 1530 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jul 7 06:07:18.283885 update_engine[1530]: I20250707 06:07:18.283880 1530 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jul 7 06:07:18.284102 update_engine[1530]: I20250707 06:07:18.283947 1530 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 7 06:07:18.284102 update_engine[1530]: I20250707 06:07:18.283967 1530 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 7 06:07:18.284102 update_engine[1530]: I20250707 06:07:18.283972 1530 omaha_request_action.cc:272] Request: Jul 7 06:07:18.284102 update_engine[1530]: Jul 7 06:07:18.284102 update_engine[1530]: Jul 7 06:07:18.284102 update_engine[1530]: Jul 7 06:07:18.284102 update_engine[1530]: Jul 7 06:07:18.284102 update_engine[1530]: Jul 7 06:07:18.284102 update_engine[1530]: Jul 7 06:07:18.284102 update_engine[1530]: I20250707 06:07:18.283979 1530 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 06:07:18.284391 update_engine[1530]: I20250707 06:07:18.284119 1530 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 06:07:18.284391 update_engine[1530]: I20250707 06:07:18.284317 1530 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 7 06:07:18.284764 locksmithd[1561]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jul 7 06:07:18.285296 update_engine[1530]: E20250707 06:07:18.285210 1530 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 06:07:18.285377 update_engine[1530]: I20250707 06:07:18.285342 1530 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 7 06:07:18.285377 update_engine[1530]: I20250707 06:07:18.285367 1530 omaha_request_action.cc:617] Omaha request response: Jul 7 06:07:18.285377 update_engine[1530]: I20250707 06:07:18.285377 1530 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 7 06:07:18.285517 update_engine[1530]: I20250707 06:07:18.285384 1530 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 7 06:07:18.285517 update_engine[1530]: I20250707 06:07:18.285392 1530 update_attempter.cc:306] Processing Done. Jul 7 06:07:18.285517 update_engine[1530]: I20250707 06:07:18.285401 1530 update_attempter.cc:310] Error event sent. 
Jul 7 06:07:18.285517 update_engine[1530]: I20250707 06:07:18.285415 1530 update_check_scheduler.cc:74] Next update check in 41m40s Jul 7 06:07:18.285947 locksmithd[1561]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jul 7 06:07:21.130879 containerd[1544]: time="2025-07-07T06:07:21.130826449Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f9500fbb35c808a738324b0631ac63e92db772c81e96ef05ea9825535fdf7448\" id:\"7095126f44e3f2e4df873217cf111f94023c3d18edec00133eb04309b92a0141\" pid:5742 exited_at:{seconds:1751868441 nanos:130323011}" Jul 7 06:07:24.332629 containerd[1544]: time="2025-07-07T06:07:24.332570684Z" level=info msg="TaskExit event in podsandbox handler container_id:\"af701b1a93951f9a4c4266caccafeaa14381bf16a1a238e15296554459872778\" id:\"4acc93112fa651cb0bb31d7caed6ddb0c7a120c76dad32ddf2ae36b76c4d7a0b\" pid:5768 exited_at:{seconds:1751868444 nanos:332190316}" Jul 7 06:07:26.503003 systemd[1]: Started sshd@7-172.236.119.245:22-147.75.109.163:56828.service - OpenSSH per-connection server daemon (147.75.109.163:56828). Jul 7 06:07:26.871656 sshd[5784]: Accepted publickey for core from 147.75.109.163 port 56828 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4 Jul 7 06:07:26.873838 sshd-session[5784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:07:26.880980 systemd-logind[1528]: New session 8 of user core. Jul 7 06:07:26.886843 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 7 06:07:27.114769 kubelet[2727]: E0707 06:07:27.114660 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:07:27.114769 kubelet[2727]: E0707 06:07:27.114688 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:07:27.200875 sshd[5786]: Connection closed by 147.75.109.163 port 56828 Jul 7 06:07:27.201418 sshd-session[5784]: pam_unix(sshd:session): session closed for user core Jul 7 06:07:27.205303 systemd[1]: sshd@7-172.236.119.245:22-147.75.109.163:56828.service: Deactivated successfully. Jul 7 06:07:27.207486 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 06:07:27.209186 systemd-logind[1528]: Session 8 logged out. Waiting for processes to exit. Jul 7 06:07:27.210885 systemd-logind[1528]: Removed session 8. Jul 7 06:07:32.115271 kubelet[2727]: E0707 06:07:32.115166 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:07:32.270117 systemd[1]: Started sshd@8-172.236.119.245:22-147.75.109.163:56830.service - OpenSSH per-connection server daemon (147.75.109.163:56830). Jul 7 06:07:32.628189 sshd[5802]: Accepted publickey for core from 147.75.109.163 port 56830 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4 Jul 7 06:07:32.629289 sshd-session[5802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:07:32.634390 systemd-logind[1528]: New session 9 of user core. Jul 7 06:07:32.641816 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jul 7 06:07:32.971722 sshd[5804]: Connection closed by 147.75.109.163 port 56830 Jul 7 06:07:32.972819 sshd-session[5802]: pam_unix(sshd:session): session closed for user core Jul 7 06:07:32.980225 systemd[1]: sshd@8-172.236.119.245:22-147.75.109.163:56830.service: Deactivated successfully. Jul 7 06:07:32.980366 systemd-logind[1528]: Session 9 logged out. Waiting for processes to exit. Jul 7 06:07:32.983995 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 06:07:32.986990 systemd-logind[1528]: Removed session 9. Jul 7 06:07:34.406748 containerd[1544]: time="2025-07-07T06:07:34.406499409Z" level=info msg="TaskExit event in podsandbox handler container_id:\"af701b1a93951f9a4c4266caccafeaa14381bf16a1a238e15296554459872778\" id:\"2c01c6d19064fdfe777a5b4a85fa46e2b290ae9b37bfc804c2cd9a75afd4fdd1\" pid:5828 exited_at:{seconds:1751868454 nanos:405688873}" Jul 7 06:07:38.034367 systemd[1]: Started sshd@9-172.236.119.245:22-147.75.109.163:33370.service - OpenSSH per-connection server daemon (147.75.109.163:33370). Jul 7 06:07:38.385764 sshd[5842]: Accepted publickey for core from 147.75.109.163 port 33370 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4 Jul 7 06:07:38.387295 sshd-session[5842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:07:38.392912 systemd-logind[1528]: New session 10 of user core. Jul 7 06:07:38.401863 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 06:07:38.694883 sshd[5844]: Connection closed by 147.75.109.163 port 33370 Jul 7 06:07:38.695662 sshd-session[5842]: pam_unix(sshd:session): session closed for user core Jul 7 06:07:38.699922 systemd-logind[1528]: Session 10 logged out. Waiting for processes to exit. Jul 7 06:07:38.700447 systemd[1]: sshd@9-172.236.119.245:22-147.75.109.163:33370.service: Deactivated successfully. Jul 7 06:07:38.702684 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 06:07:38.705425 systemd-logind[1528]: Removed session 10. 
Jul 7 06:07:38.756077 systemd[1]: Started sshd@10-172.236.119.245:22-147.75.109.163:33386.service - OpenSSH per-connection server daemon (147.75.109.163:33386). Jul 7 06:07:39.100027 sshd[5857]: Accepted publickey for core from 147.75.109.163 port 33386 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4 Jul 7 06:07:39.101335 sshd-session[5857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:07:39.107903 systemd-logind[1528]: New session 11 of user core. Jul 7 06:07:39.113865 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 06:07:39.449638 sshd[5859]: Connection closed by 147.75.109.163 port 33386 Jul 7 06:07:39.452529 sshd-session[5857]: pam_unix(sshd:session): session closed for user core Jul 7 06:07:39.457320 systemd-logind[1528]: Session 11 logged out. Waiting for processes to exit. Jul 7 06:07:39.457513 systemd[1]: sshd@10-172.236.119.245:22-147.75.109.163:33386.service: Deactivated successfully. Jul 7 06:07:39.460458 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 06:07:39.462804 systemd-logind[1528]: Removed session 11. Jul 7 06:07:39.515952 systemd[1]: Started sshd@11-172.236.119.245:22-147.75.109.163:33394.service - OpenSSH per-connection server daemon (147.75.109.163:33394). Jul 7 06:07:39.869098 sshd[5868]: Accepted publickey for core from 147.75.109.163 port 33394 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4 Jul 7 06:07:39.870775 sshd-session[5868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:07:39.877099 systemd-logind[1528]: New session 12 of user core. Jul 7 06:07:39.888851 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 06:07:40.176816 sshd[5870]: Connection closed by 147.75.109.163 port 33394 Jul 7 06:07:40.177412 sshd-session[5868]: pam_unix(sshd:session): session closed for user core Jul 7 06:07:40.181792 systemd-logind[1528]: Session 12 logged out. Waiting for processes to exit. 
Jul 7 06:07:40.182228 systemd[1]: sshd@11-172.236.119.245:22-147.75.109.163:33394.service: Deactivated successfully. Jul 7 06:07:40.185052 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 06:07:40.187347 systemd-logind[1528]: Removed session 12. Jul 7 06:07:41.452904 containerd[1544]: time="2025-07-07T06:07:41.452862833Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1dec3c4b9187cfbc0c3ecec6df0edada53c6b6c48276864ebb454aaa1f6d5aea\" id:\"b0a39ed22b090fa6aac54a96d4537fc363635ca58c82323840e5486d84d621fa\" pid:5894 exited_at:{seconds:1751868461 nanos:452524284}" Jul 7 06:07:45.247841 systemd[1]: Started sshd@12-172.236.119.245:22-147.75.109.163:33406.service - OpenSSH per-connection server daemon (147.75.109.163:33406). Jul 7 06:07:45.596804 sshd[5905]: Accepted publickey for core from 147.75.109.163 port 33406 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4 Jul 7 06:07:45.597680 sshd-session[5905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:07:45.604285 systemd-logind[1528]: New session 13 of user core. Jul 7 06:07:45.609851 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 06:07:45.906911 sshd[5907]: Connection closed by 147.75.109.163 port 33406 Jul 7 06:07:45.907519 sshd-session[5905]: pam_unix(sshd:session): session closed for user core Jul 7 06:07:45.912862 systemd-logind[1528]: Session 13 logged out. Waiting for processes to exit. Jul 7 06:07:45.913684 systemd[1]: sshd@12-172.236.119.245:22-147.75.109.163:33406.service: Deactivated successfully. Jul 7 06:07:45.916289 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 06:07:45.919128 systemd-logind[1528]: Removed session 13. Jul 7 06:07:45.970291 systemd[1]: Started sshd@13-172.236.119.245:22-147.75.109.163:33414.service - OpenSSH per-connection server daemon (147.75.109.163:33414). 
Jul 7 06:07:46.321816 sshd[5919]: Accepted publickey for core from 147.75.109.163 port 33414 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4 Jul 7 06:07:46.323116 sshd-session[5919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:07:46.328270 systemd-logind[1528]: New session 14 of user core. Jul 7 06:07:46.335860 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 06:07:46.774741 sshd[5921]: Connection closed by 147.75.109.163 port 33414 Jul 7 06:07:46.775401 sshd-session[5919]: pam_unix(sshd:session): session closed for user core Jul 7 06:07:46.782415 systemd[1]: sshd@13-172.236.119.245:22-147.75.109.163:33414.service: Deactivated successfully. Jul 7 06:07:46.784948 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 06:07:46.785869 systemd-logind[1528]: Session 14 logged out. Waiting for processes to exit. Jul 7 06:07:46.787489 systemd-logind[1528]: Removed session 14. Jul 7 06:07:46.838315 systemd[1]: Started sshd@14-172.236.119.245:22-147.75.109.163:56756.service - OpenSSH per-connection server daemon (147.75.109.163:56756). Jul 7 06:07:47.121525 kubelet[2727]: E0707 06:07:47.121384 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:07:47.121525 kubelet[2727]: E0707 06:07:47.121443 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:07:47.192068 sshd[5931]: Accepted publickey for core from 147.75.109.163 port 56756 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4 Jul 7 06:07:47.193988 sshd-session[5931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:07:47.200212 systemd-logind[1528]: New session 15 of user core. 
Jul 7 06:07:47.202824 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 06:07:48.059072 sshd[5933]: Connection closed by 147.75.109.163 port 56756 Jul 7 06:07:48.059829 sshd-session[5931]: pam_unix(sshd:session): session closed for user core Jul 7 06:07:48.064340 systemd-logind[1528]: Session 15 logged out. Waiting for processes to exit. Jul 7 06:07:48.065032 systemd[1]: sshd@14-172.236.119.245:22-147.75.109.163:56756.service: Deactivated successfully. Jul 7 06:07:48.070118 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 06:07:48.075014 systemd-logind[1528]: Removed session 15. Jul 7 06:07:48.122898 systemd[1]: Started sshd@15-172.236.119.245:22-147.75.109.163:56768.service - OpenSSH per-connection server daemon (147.75.109.163:56768). Jul 7 06:07:48.470490 sshd[5951]: Accepted publickey for core from 147.75.109.163 port 56768 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4 Jul 7 06:07:48.471965 sshd-session[5951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:07:48.476769 systemd-logind[1528]: New session 16 of user core. Jul 7 06:07:48.489864 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 06:07:48.901504 sshd[5954]: Connection closed by 147.75.109.163 port 56768 Jul 7 06:07:48.902887 sshd-session[5951]: pam_unix(sshd:session): session closed for user core Jul 7 06:07:48.907863 systemd-logind[1528]: Session 16 logged out. Waiting for processes to exit. Jul 7 06:07:48.908048 systemd[1]: sshd@15-172.236.119.245:22-147.75.109.163:56768.service: Deactivated successfully. Jul 7 06:07:48.911127 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 06:07:48.914001 systemd-logind[1528]: Removed session 16. Jul 7 06:07:48.969907 systemd[1]: Started sshd@16-172.236.119.245:22-147.75.109.163:56772.service - OpenSSH per-connection server daemon (147.75.109.163:56772). 
Jul 7 06:07:49.316580 sshd[5964]: Accepted publickey for core from 147.75.109.163 port 56772 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4 Jul 7 06:07:49.318380 sshd-session[5964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:07:49.324577 systemd-logind[1528]: New session 17 of user core. Jul 7 06:07:49.331843 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 7 06:07:49.641292 sshd[5966]: Connection closed by 147.75.109.163 port 56772 Jul 7 06:07:49.642095 sshd-session[5964]: pam_unix(sshd:session): session closed for user core Jul 7 06:07:49.647386 systemd-logind[1528]: Session 17 logged out. Waiting for processes to exit. Jul 7 06:07:49.648443 systemd[1]: sshd@16-172.236.119.245:22-147.75.109.163:56772.service: Deactivated successfully. Jul 7 06:07:49.651416 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 06:07:49.653421 systemd-logind[1528]: Removed session 17. Jul 7 06:07:51.149685 containerd[1544]: time="2025-07-07T06:07:51.149638605Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f9500fbb35c808a738324b0631ac63e92db772c81e96ef05ea9825535fdf7448\" id:\"94347f66f4b27f30a47438d578c3d9f135ca7aff72f0372aab877fb2468ca0bb\" pid:5989 exited_at:{seconds:1751868471 nanos:149202326}" Jul 7 06:07:54.705884 systemd[1]: Started sshd@17-172.236.119.245:22-147.75.109.163:56784.service - OpenSSH per-connection server daemon (147.75.109.163:56784). Jul 7 06:07:55.059244 sshd[6005]: Accepted publickey for core from 147.75.109.163 port 56784 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4 Jul 7 06:07:55.059828 sshd-session[6005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:07:55.064974 systemd-logind[1528]: New session 18 of user core. Jul 7 06:07:55.071847 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jul 7 06:07:55.375649 sshd[6007]: Connection closed by 147.75.109.163 port 56784 Jul 7 06:07:55.377990 sshd-session[6005]: pam_unix(sshd:session): session closed for user core Jul 7 06:07:55.383420 systemd-logind[1528]: Session 18 logged out. Waiting for processes to exit. Jul 7 06:07:55.384621 systemd[1]: sshd@17-172.236.119.245:22-147.75.109.163:56784.service: Deactivated successfully. Jul 7 06:07:55.387256 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 06:07:55.389623 systemd-logind[1528]: Removed session 18. Jul 7 06:08:00.114404 kubelet[2727]: E0707 06:08:00.113945 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jul 7 06:08:00.442747 systemd[1]: Started sshd@18-172.236.119.245:22-147.75.109.163:55388.service - OpenSSH per-connection server daemon (147.75.109.163:55388). Jul 7 06:08:00.788517 sshd[6020]: Accepted publickey for core from 147.75.109.163 port 55388 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4 Jul 7 06:08:00.789889 sshd-session[6020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:08:00.795516 systemd-logind[1528]: New session 19 of user core. Jul 7 06:08:00.802829 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 7 06:08:01.105605 sshd[6022]: Connection closed by 147.75.109.163 port 55388 Jul 7 06:08:01.106930 sshd-session[6020]: pam_unix(sshd:session): session closed for user core Jul 7 06:08:01.111390 systemd-logind[1528]: Session 19 logged out. Waiting for processes to exit. Jul 7 06:08:01.112168 systemd[1]: sshd@18-172.236.119.245:22-147.75.109.163:55388.service: Deactivated successfully. Jul 7 06:08:01.115910 systemd[1]: session-19.scope: Deactivated successfully. Jul 7 06:08:01.117933 systemd-logind[1528]: Removed session 19. 
Jul 7 06:08:03.093129 containerd[1544]: time="2025-07-07T06:08:03.093078589Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1dec3c4b9187cfbc0c3ecec6df0edada53c6b6c48276864ebb454aaa1f6d5aea\" id:\"572ae130d8c5ccccf0ae692d26ae3b07600a09a577f93e9fb8e33853c4853834\" pid:6046 exited_at:{seconds:1751868483 nanos:92534261}" Jul 7 06:08:04.412291 containerd[1544]: time="2025-07-07T06:08:04.412252947Z" level=info msg="TaskExit event in podsandbox handler container_id:\"af701b1a93951f9a4c4266caccafeaa14381bf16a1a238e15296554459872778\" id:\"4984505367a112554d7779eb0b2f00f87911c316656f376abd92e0a30dff1e80\" pid:6067 exited_at:{seconds:1751868484 nanos:412114479}"