Jul 7 06:11:50.851515 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 21:56:00 -00 2025
Jul 7 06:11:50.851538 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50
Jul 7 06:11:50.851546 kernel: BIOS-provided physical RAM map:
Jul 7 06:11:50.851555 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Jul 7 06:11:50.851560 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Jul 7 06:11:50.851566 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 7 06:11:50.851572 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Jul 7 06:11:50.851578 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Jul 7 06:11:50.851583 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 7 06:11:50.851589 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 7 06:11:50.851594 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 7 06:11:50.851600 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 7 06:11:50.851607 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Jul 7 06:11:50.851613 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 7 06:11:50.851620 kernel: NX (Execute Disable) protection: active
Jul 7 06:11:50.851626 kernel: APIC: Static calls initialized
Jul 7 06:11:50.851632 kernel: SMBIOS 2.8 present.
Jul 7 06:11:50.851639 kernel: DMI: Linode Compute Instance, BIOS Not Specified
Jul 7 06:11:50.851646 kernel: DMI: Memory slots populated: 1/1
Jul 7 06:11:50.851651 kernel: Hypervisor detected: KVM
Jul 7 06:11:50.851657 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 7 06:11:50.851663 kernel: kvm-clock: using sched offset of 6143401690 cycles
Jul 7 06:11:50.851669 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 7 06:11:50.851675 kernel: tsc: Detected 2000.000 MHz processor
Jul 7 06:11:50.851682 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 7 06:11:50.851688 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 7 06:11:50.851694 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Jul 7 06:11:50.851702 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 7 06:11:50.851709 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 7 06:11:50.851715 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Jul 7 06:11:50.851721 kernel: Using GB pages for direct mapping
Jul 7 06:11:50.851727 kernel: ACPI: Early table checksum verification disabled
Jul 7 06:11:50.851733 kernel: ACPI: RSDP 0x00000000000F51B0 000014 (v00 BOCHS )
Jul 7 06:11:50.851739 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:11:50.851745 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:11:50.851752 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:11:50.851760 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jul 7 06:11:50.851766 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:11:50.851772 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:11:50.851778 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:11:50.851787 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:11:50.851794 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Jul 7 06:11:50.851802 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Jul 7 06:11:50.851809 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jul 7 06:11:50.851815 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Jul 7 06:11:50.851822 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Jul 7 06:11:50.851828 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Jul 7 06:11:50.851834 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Jul 7 06:11:50.851840 kernel: No NUMA configuration found
Jul 7 06:11:50.851847 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Jul 7 06:11:50.851855 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff]
Jul 7 06:11:50.851862 kernel: Zone ranges:
Jul 7 06:11:50.851868 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 7 06:11:50.851874 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 7 06:11:50.851881 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Jul 7 06:11:50.851887 kernel: Device empty
Jul 7 06:11:50.851893 kernel: Movable zone start for each node
Jul 7 06:11:50.851899 kernel: Early memory node ranges
Jul 7 06:11:50.851906 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 7 06:11:50.851912 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Jul 7 06:11:50.851921 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Jul 7 06:11:50.851927 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Jul 7 06:11:50.851933 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 7 06:11:50.851940 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 7 06:11:50.851946 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jul 7 06:11:50.851952 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 7 06:11:50.851959 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 7 06:11:50.851965 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 7 06:11:50.851971 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 7 06:11:50.851980 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 7 06:11:50.851986 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 7 06:11:50.851992 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 7 06:11:50.851999 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 7 06:11:50.852005 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 7 06:11:50.852011 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 7 06:11:50.852017 kernel: TSC deadline timer available
Jul 7 06:11:50.852024 kernel: CPU topo: Max. logical packages: 1
Jul 7 06:11:50.852030 kernel: CPU topo: Max. logical dies: 1
Jul 7 06:11:50.852038 kernel: CPU topo: Max. dies per package: 1
Jul 7 06:11:50.852044 kernel: CPU topo: Max. threads per core: 1
Jul 7 06:11:50.852050 kernel: CPU topo: Num. cores per package: 2
Jul 7 06:11:50.852057 kernel: CPU topo: Num. threads per package: 2
Jul 7 06:11:50.852063 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jul 7 06:11:50.852069 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 7 06:11:50.852075 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 7 06:11:50.852082 kernel: kvm-guest: setup PV sched yield
Jul 7 06:11:50.852088 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 7 06:11:50.852096 kernel: Booting paravirtualized kernel on KVM
Jul 7 06:11:50.852103 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 7 06:11:50.852109 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 7 06:11:50.852148 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jul 7 06:11:50.852156 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jul 7 06:11:50.852162 kernel: pcpu-alloc: [0] 0 1
Jul 7 06:11:50.852188 kernel: kvm-guest: PV spinlocks enabled
Jul 7 06:11:50.852194 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 7 06:11:50.852202 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50
Jul 7 06:11:50.852212 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 06:11:50.852218 kernel: random: crng init done
Jul 7 06:11:50.852224 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 7 06:11:50.852231 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 7 06:11:50.852237 kernel: Fallback order for Node 0: 0
Jul 7 06:11:50.852248 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Jul 7 06:11:50.852255 kernel: Policy zone: Normal
Jul 7 06:11:50.852261 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 06:11:50.852269 kernel: software IO TLB: area num 2.
Jul 7 06:11:50.852276 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 7 06:11:50.852282 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 7 06:11:50.852289 kernel: ftrace: allocated 157 pages with 5 groups
Jul 7 06:11:50.852295 kernel: Dynamic Preempt: voluntary
Jul 7 06:11:50.852301 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 06:11:50.852308 kernel: rcu: RCU event tracing is enabled.
Jul 7 06:11:50.852315 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 7 06:11:50.852322 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 06:11:50.852330 kernel: Rude variant of Tasks RCU enabled.
Jul 7 06:11:50.852337 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 06:11:50.852343 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 06:11:50.852349 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 7 06:11:50.852356 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 06:11:50.852369 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 06:11:50.852378 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 06:11:50.852385 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 7 06:11:50.852391 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 06:11:50.852398 kernel: Console: colour VGA+ 80x25
Jul 7 06:11:50.852405 kernel: printk: legacy console [tty0] enabled
Jul 7 06:11:50.852411 kernel: printk: legacy console [ttyS0] enabled
Jul 7 06:11:50.852420 kernel: ACPI: Core revision 20240827
Jul 7 06:11:50.852427 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 7 06:11:50.852433 kernel: APIC: Switch to symmetric I/O mode setup
Jul 7 06:11:50.852440 kernel: x2apic enabled
Jul 7 06:11:50.852447 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 7 06:11:50.852455 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 7 06:11:50.852462 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 7 06:11:50.852469 kernel: kvm-guest: setup PV IPIs
Jul 7 06:11:50.852475 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 7 06:11:50.852482 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Jul 7 06:11:50.852489 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Jul 7 06:11:50.852496 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 7 06:11:50.852502 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 7 06:11:50.852509 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 7 06:11:50.852517 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 7 06:11:50.852524 kernel: Spectre V2 : Mitigation: Retpolines
Jul 7 06:11:50.852531 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 7 06:11:50.852537 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jul 7 06:11:50.852544 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 7 06:11:50.852551 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 7 06:11:50.852558 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 7 06:11:50.852565 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 7 06:11:50.852574 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 7 06:11:50.852581 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 7 06:11:50.852587 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 7 06:11:50.852594 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 7 06:11:50.852601 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jul 7 06:11:50.852607 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 7 06:11:50.852614 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Jul 7 06:11:50.852621 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Jul 7 06:11:50.852627 kernel: Freeing SMP alternatives memory: 32K
Jul 7 06:11:50.852636 kernel: pid_max: default: 32768 minimum: 301
Jul 7 06:11:50.852643 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 7 06:11:50.852649 kernel: landlock: Up and running.
Jul 7 06:11:50.852656 kernel: SELinux: Initializing.
Jul 7 06:11:50.852662 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 06:11:50.852669 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 06:11:50.852676 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jul 7 06:11:50.852682 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 7 06:11:50.852689 kernel: ... version: 0
Jul 7 06:11:50.852697 kernel: ... bit width: 48
Jul 7 06:11:50.852704 kernel: ... generic registers: 6
Jul 7 06:11:50.852711 kernel: ... value mask: 0000ffffffffffff
Jul 7 06:11:50.852718 kernel: ... max period: 00007fffffffffff
Jul 7 06:11:50.852725 kernel: ... fixed-purpose events: 0
Jul 7 06:11:50.852731 kernel: ... event mask: 000000000000003f
Jul 7 06:11:50.852738 kernel: signal: max sigframe size: 3376
Jul 7 06:11:50.852744 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 06:11:50.852751 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 06:11:50.852760 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 7 06:11:50.852766 kernel: smp: Bringing up secondary CPUs ...
Jul 7 06:11:50.852773 kernel: smpboot: x86: Booting SMP configuration:
Jul 7 06:11:50.852779 kernel: .... node #0, CPUs: #1
Jul 7 06:11:50.852786 kernel: smp: Brought up 1 node, 2 CPUs
Jul 7 06:11:50.852793 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Jul 7 06:11:50.852800 kernel: Memory: 3961048K/4193772K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54432K init, 2536K bss, 227296K reserved, 0K cma-reserved)
Jul 7 06:11:50.852806 kernel: devtmpfs: initialized
Jul 7 06:11:50.852813 kernel: x86/mm: Memory block size: 128MB
Jul 7 06:11:50.852821 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 06:11:50.852828 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 7 06:11:50.852835 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 06:11:50.852841 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 06:11:50.852848 kernel: audit: initializing netlink subsys (disabled)
Jul 7 06:11:50.852854 kernel: audit: type=2000 audit(1751868709.333:1): state=initialized audit_enabled=0 res=1
Jul 7 06:11:50.852861 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 06:11:50.852868 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 7 06:11:50.852874 kernel: cpuidle: using governor menu
Jul 7 06:11:50.852883 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 06:11:50.852889 kernel: dca service started, version 1.12.1
Jul 7 06:11:50.852896 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jul 7 06:11:50.852903 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 7 06:11:50.852909 kernel: PCI: Using configuration type 1 for base access
Jul 7 06:11:50.852916 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 7 06:11:50.852923 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 7 06:11:50.852929 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 7 06:11:50.852936 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 06:11:50.852944 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 06:11:50.852951 kernel: ACPI: Added _OSI(Module Device)
Jul 7 06:11:50.852957 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 06:11:50.852983 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 06:11:50.852990 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 7 06:11:50.852997 kernel: ACPI: Interpreter enabled
Jul 7 06:11:50.853004 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 7 06:11:50.853010 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 7 06:11:50.853017 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 7 06:11:50.853026 kernel: PCI: Using E820 reservations for host bridge windows
Jul 7 06:11:50.853037 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 7 06:11:50.853043 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 7 06:11:50.854431 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 7 06:11:50.854554 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 7 06:11:50.854663 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 7 06:11:50.854673 kernel: PCI host bridge to bus 0000:00
Jul 7 06:11:50.854792 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 7 06:11:50.854892 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 7 06:11:50.854989 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 7 06:11:50.855086 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jul 7 06:11:50.855203 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 7 06:11:50.855301 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Jul 7 06:11:50.855397 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 7 06:11:50.855528 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jul 7 06:11:50.855650 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 7 06:11:50.855758 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jul 7 06:11:50.855863 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jul 7 06:11:50.855968 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jul 7 06:11:50.856072 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 7 06:11:50.858269 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Jul 7 06:11:50.858391 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Jul 7 06:11:50.858498 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jul 7 06:11:50.858605 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 7 06:11:50.858720 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 7 06:11:50.858827 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Jul 7 06:11:50.858932 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jul 7 06:11:50.859041 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 7 06:11:50.859163 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jul 7 06:11:50.859283 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jul 7 06:11:50.859404 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 7 06:11:50.859521 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jul 7 06:11:50.859628 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Jul 7 06:11:50.859733 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Jul 7 06:11:50.859850 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jul 7 06:11:50.859956 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jul 7 06:11:50.859965 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 7 06:11:50.859973 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 7 06:11:50.859980 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 7 06:11:50.859987 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 7 06:11:50.859993 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 7 06:11:50.860000 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 7 06:11:50.860010 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 7 06:11:50.860017 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 7 06:11:50.860023 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 7 06:11:50.860030 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 7 06:11:50.860036 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 7 06:11:50.860043 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 7 06:11:50.860050 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 7 06:11:50.860057 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 7 06:11:50.860063 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 7 06:11:50.860072 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 7 06:11:50.860079 kernel: iommu: Default domain type: Translated
Jul 7 06:11:50.860086 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 7 06:11:50.860093 kernel: PCI: Using ACPI for IRQ routing
Jul 7 06:11:50.860099 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 7 06:11:50.860106 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Jul 7 06:11:50.860113 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Jul 7 06:11:50.862294 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 7 06:11:50.862413 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 7 06:11:50.862521 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 7 06:11:50.862530 kernel: vgaarb: loaded
Jul 7 06:11:50.862538 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 7 06:11:50.862544 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 7 06:11:50.862551 kernel: clocksource: Switched to clocksource kvm-clock
Jul 7 06:11:50.862558 kernel: VFS: Disk quotas dquot_6.6.0
Jul 7 06:11:50.862565 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 7 06:11:50.862572 kernel: pnp: PnP ACPI init
Jul 7 06:11:50.862694 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 7 06:11:50.862705 kernel: pnp: PnP ACPI: found 5 devices
Jul 7 06:11:50.862712 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 7 06:11:50.862719 kernel: NET: Registered PF_INET protocol family
Jul 7 06:11:50.862726 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 7 06:11:50.862733 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 7 06:11:50.862739 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 7 06:11:50.862746 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 7 06:11:50.862756 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 7 06:11:50.862763 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 7 06:11:50.862769 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 06:11:50.862776 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 06:11:50.862783 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 7 06:11:50.862789 kernel: NET: Registered PF_XDP protocol family
Jul 7 06:11:50.862888 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 7 06:11:50.862986 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 7 06:11:50.863085 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 7 06:11:50.863204 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jul 7 06:11:50.863302 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 7 06:11:50.863398 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Jul 7 06:11:50.863407 kernel: PCI: CLS 0 bytes, default 64
Jul 7 06:11:50.863414 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 7 06:11:50.863421 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Jul 7 06:11:50.863428 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Jul 7 06:11:50.863434 kernel: Initialise system trusted keyrings
Jul 7 06:11:50.863445 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 7 06:11:50.863451 kernel: Key type asymmetric registered
Jul 7 06:11:50.863458 kernel: Asymmetric key parser 'x509' registered
Jul 7 06:11:50.863465 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 7 06:11:50.863471 kernel: io scheduler mq-deadline registered
Jul 7 06:11:50.863478 kernel: io scheduler kyber registered
Jul 7 06:11:50.863485 kernel: io scheduler bfq registered
Jul 7 06:11:50.863491 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 7 06:11:50.863499 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 7 06:11:50.863507 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 7 06:11:50.863514 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 7 06:11:50.863521 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 7 06:11:50.863527 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 7 06:11:50.863534 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 7 06:11:50.863541 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 7 06:11:50.863548 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 7 06:11:50.863658 kernel: rtc_cmos 00:03: RTC can wake from S4
Jul 7 06:11:50.863760 kernel: rtc_cmos 00:03: registered as rtc0
Jul 7 06:11:50.863864 kernel: rtc_cmos 00:03: setting system clock to 2025-07-07T06:11:50 UTC (1751868710)
Jul 7 06:11:50.863968 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 7 06:11:50.863978 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 7 06:11:50.863985 kernel: NET: Registered PF_INET6 protocol family
Jul 7 06:11:50.863992 kernel: Segment Routing with IPv6
Jul 7 06:11:50.863999 kernel: In-situ OAM (IOAM) with IPv6
Jul 7 06:11:50.864005 kernel: NET: Registered PF_PACKET protocol family
Jul 7 06:11:50.864012 kernel: Key type dns_resolver registered
Jul 7 06:11:50.864021 kernel: IPI shorthand broadcast: enabled
Jul 7 06:11:50.864028 kernel: sched_clock: Marking stable (2701003460, 218041200)->(2951453210, -32408550)
Jul 7 06:11:50.864035 kernel: registered taskstats version 1
Jul 7 06:11:50.864041 kernel: Loading compiled-in X.509 certificates
Jul 7 06:11:50.864048 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: b8e96f4c6a9e663230fc9c12b186cf91fcc7a64e'
Jul 7 06:11:50.864055 kernel: Demotion targets for Node 0: null
Jul 7 06:11:50.864061 kernel: Key type .fscrypt registered
Jul 7 06:11:50.864068 kernel: Key type fscrypt-provisioning registered
Jul 7 06:11:50.864075 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 7 06:11:50.864083 kernel: ima: Allocated hash algorithm: sha1
Jul 7 06:11:50.864090 kernel: ima: No architecture policies found
Jul 7 06:11:50.864096 kernel: clk: Disabling unused clocks
Jul 7 06:11:50.864103 kernel: Warning: unable to open an initial console.
Jul 7 06:11:50.864110 kernel: Freeing unused kernel image (initmem) memory: 54432K
Jul 7 06:11:50.865324 kernel: Write protecting the kernel read-only data: 24576k
Jul 7 06:11:50.865332 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 7 06:11:50.865339 kernel: Run /init as init process
Jul 7 06:11:50.865345 kernel: with arguments:
Jul 7 06:11:50.865356 kernel: /init
Jul 7 06:11:50.865362 kernel: with environment:
Jul 7 06:11:50.865369 kernel: HOME=/
Jul 7 06:11:50.865375 kernel: TERM=linux
Jul 7 06:11:50.865382 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 7 06:11:50.867156 systemd[1]: Successfully made /usr/ read-only.
Jul 7 06:11:50.867170 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 06:11:50.867181 systemd[1]: Detected virtualization kvm.
Jul 7 06:11:50.867188 systemd[1]: Detected architecture x86-64.
Jul 7 06:11:50.867195 systemd[1]: Running in initrd.
Jul 7 06:11:50.867203 systemd[1]: No hostname configured, using default hostname.
Jul 7 06:11:50.867212 systemd[1]: Hostname set to .
Jul 7 06:11:50.867220 systemd[1]: Initializing machine ID from random generator.
Jul 7 06:11:50.867227 systemd[1]: Queued start job for default target initrd.target.
Jul 7 06:11:50.867235 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:11:50.867244 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:11:50.867252 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 06:11:50.867260 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 06:11:50.867268 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 06:11:50.867276 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 06:11:50.867285 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 06:11:50.867292 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 06:11:50.867302 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:11:50.867309 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:11:50.867317 systemd[1]: Reached target paths.target - Path Units.
Jul 7 06:11:50.867324 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 06:11:50.867331 systemd[1]: Reached target swap.target - Swaps.
Jul 7 06:11:50.867339 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 06:11:50.867346 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 06:11:50.867353 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 06:11:50.867361 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 06:11:50.867370 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 7 06:11:50.867378 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:11:50.867385 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:11:50.867393 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:11:50.867400 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 06:11:50.867408 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 06:11:50.867417 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 06:11:50.867425 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 06:11:50.867433 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 7 06:11:50.867440 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 06:11:50.867448 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 06:11:50.867455 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 06:11:50.867463 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:11:50.867473 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 06:11:50.867481 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:11:50.867511 systemd-journald[206]: Collecting audit messages is disabled.
Jul 7 06:11:50.867531 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 06:11:50.867539 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 06:11:50.867548 systemd-journald[206]: Journal started
Jul 7 06:11:50.867565 systemd-journald[206]: Runtime Journal (/run/log/journal/ca825d75b7bc4450a2bf28b708b7dff6) is 8M, max 78.5M, 70.5M free.
Jul 7 06:11:50.851181 systemd-modules-load[207]: Inserted module 'overlay'
Jul 7 06:11:50.918683 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 06:11:50.918699 kernel: Bridge firewalling registered
Jul 7 06:11:50.918709 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 06:11:50.879806 systemd-modules-load[207]: Inserted module 'br_netfilter'
Jul 7 06:11:50.940365 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:11:50.942845 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:11:50.946770 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 06:11:50.950101 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:11:50.954171 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 06:11:50.956196 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 06:11:50.962231 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 06:11:50.964717 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:11:50.969679 systemd-tmpfiles[226]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 7 06:11:50.973968 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:11:50.975995 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 06:11:50.978307 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:11:50.979881 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:11:50.986227 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 06:11:50.996805 dracut-cmdline[243]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50
Jul 7 06:11:51.028553 systemd-resolved[246]: Positive Trust Anchors:
Jul 7 06:11:51.029272 systemd-resolved[246]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 06:11:51.029302 systemd-resolved[246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 06:11:51.034309 systemd-resolved[246]: Defaulting to hostname 'linux'.
Jul 7 06:11:51.035264 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 06:11:51.036069 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:11:51.079149 kernel: SCSI subsystem initialized
Jul 7 06:11:51.087148 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 06:11:51.097147 kernel: iscsi: registered transport (tcp)
Jul 7 06:11:51.116504 kernel: iscsi: registered transport (qla4xxx)
Jul 7 06:11:51.116544 kernel: QLogic iSCSI HBA Driver
Jul 7 06:11:51.134390 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 06:11:51.147414 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 06:11:51.150022 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 06:11:51.197105 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 06:11:51.199484 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 06:11:51.253152 kernel: raid6: avx2x4 gen() 30561 MB/s
Jul 7 06:11:51.271141 kernel: raid6: avx2x2 gen() 29148 MB/s
Jul 7 06:11:51.289464 kernel: raid6: avx2x1 gen() 21216 MB/s
Jul 7 06:11:51.289480 kernel: raid6: using algorithm avx2x4 gen() 30561 MB/s
Jul 7 06:11:51.308454 kernel: raid6: .... xor() 4774 MB/s, rmw enabled
Jul 7 06:11:51.308495 kernel: raid6: using avx2x2 recovery algorithm
Jul 7 06:11:51.327155 kernel: xor: automatically using best checksumming function avx
Jul 7 06:11:51.454148 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 7 06:11:51.461608 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 06:11:51.463782 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:11:51.488777 systemd-udevd[455]: Using default interface naming scheme 'v255'.
Jul 7 06:11:51.493697 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:11:51.496258 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 7 06:11:51.521546 dracut-pre-trigger[461]: rd.md=0: removing MD RAID activation
Jul 7 06:11:51.547497 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 06:11:51.549250 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 06:11:51.607215 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:11:51.610606 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 06:11:51.666167 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Jul 7 06:11:51.675158 kernel: scsi host0: Virtio SCSI HBA
Jul 7 06:11:51.776352 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jul 7 06:11:51.795137 kernel: libata version 3.00 loaded.
Jul 7 06:11:51.804367 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 06:11:51.804539 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:11:51.807312 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:11:51.810017 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:11:51.816200 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jul 7 06:11:51.818151 kernel: cryptd: max_cpu_qlen set to 1000
Jul 7 06:11:51.835156 kernel: AES CTR mode by8 optimization enabled
Jul 7 06:11:51.855141 kernel: sd 0:0:0:0: Power-on or device reset occurred
Jul 7 06:11:51.856371 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Jul 7 06:11:51.857267 kernel: ahci 0000:00:1f.2: version 3.0
Jul 7 06:11:51.857463 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 7 06:11:51.857477 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 7 06:11:51.857624 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Jul 7 06:11:51.857767 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jul 7 06:11:51.866922 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jul 7 06:11:51.867068 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jul 7 06:11:51.867224 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 7 06:11:51.873188 kernel: scsi host1: ahci
Jul 7 06:11:51.873368 kernel: scsi host2: ahci
Jul 7 06:11:51.873507 kernel: scsi host3: ahci
Jul 7 06:11:51.876154 kernel: scsi host4: ahci
Jul 7 06:11:51.876328 kernel: scsi host5: ahci
Jul 7 06:11:51.876486 kernel: scsi host6: ahci
Jul 7 06:11:51.876625 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 0
Jul 7 06:11:51.876636 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 0
Jul 7 06:11:51.876645 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 0
Jul 7 06:11:51.876654 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 0
Jul 7 06:11:51.876663 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 0
Jul 7 06:11:51.876672 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 0
Jul 7 06:11:51.876680 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 7 06:11:51.876692 kernel: GPT:9289727 != 167739391
Jul 7 06:11:51.876920 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 7 06:11:51.876936 kernel: GPT:9289727 != 167739391
Jul 7 06:11:51.876945 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 7 06:11:51.876954 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 7 06:11:51.876963 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 7 06:11:51.950874 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:11:52.185334 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 7 06:11:52.185408 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 7 06:11:52.185420 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 7 06:11:52.185430 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jul 7 06:11:52.185439 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 7 06:11:52.187144 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 7 06:11:52.245657 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jul 7 06:11:52.254700 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jul 7 06:11:52.274017 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jul 7 06:11:52.274856 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 06:11:52.282444 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jul 7 06:11:52.283037 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jul 7 06:11:52.285602 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 06:11:52.286202 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:11:52.287416 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 06:11:52.290228 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 06:11:52.294213 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 06:11:52.305435 disk-uuid[632]: Primary Header is updated.
Jul 7 06:11:52.305435 disk-uuid[632]: Secondary Entries is updated.
Jul 7 06:11:52.305435 disk-uuid[632]: Secondary Header is updated.
Jul 7 06:11:52.310600 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 06:11:52.315147 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 7 06:11:52.328132 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 7 06:11:53.332289 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 7 06:11:53.333391 disk-uuid[637]: The operation has completed successfully.
Jul 7 06:11:53.389977 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 06:11:53.390168 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 06:11:53.420756 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 06:11:53.436502 sh[654]: Success
Jul 7 06:11:53.454957 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 06:11:53.455000 kernel: device-mapper: uevent: version 1.0.3
Jul 7 06:11:53.455631 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 7 06:11:53.468193 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jul 7 06:11:53.518210 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 7 06:11:53.524207 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 7 06:11:53.536239 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 7 06:11:53.547155 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 7 06:11:53.550142 kernel: BTRFS: device fsid 9d124217-7448-4fc6-a329-8a233bb5a0ac devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (666)
Jul 7 06:11:53.553484 kernel: BTRFS info (device dm-0): first mount of filesystem 9d124217-7448-4fc6-a329-8a233bb5a0ac
Jul 7 06:11:53.553525 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 7 06:11:53.555218 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 7 06:11:53.567208 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 06:11:53.568396 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 7 06:11:53.569333 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 7 06:11:53.570056 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 06:11:53.574337 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 06:11:53.604179 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (700)
Jul 7 06:11:53.607440 kernel: BTRFS info (device sda6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:11:53.607828 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 06:11:53.609324 kernel: BTRFS info (device sda6): using free-space-tree
Jul 7 06:11:53.620171 kernel: BTRFS info (device sda6): last unmount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:11:53.621486 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 06:11:53.623606 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 06:11:53.698398 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 06:11:53.708089 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 06:11:53.741684 ignition[762]: Ignition 2.21.0
Jul 7 06:11:53.742438 ignition[762]: Stage: fetch-offline
Jul 7 06:11:53.742478 ignition[762]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:11:53.742488 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 7 06:11:53.742567 ignition[762]: parsed url from cmdline: ""
Jul 7 06:11:53.742570 ignition[762]: no config URL provided
Jul 7 06:11:53.742575 ignition[762]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 06:11:53.747186 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 06:11:53.742583 ignition[762]: no config at "/usr/lib/ignition/user.ign"
Jul 7 06:11:53.742588 ignition[762]: failed to fetch config: resource requires networking
Jul 7 06:11:53.742920 ignition[762]: Ignition finished successfully
Jul 7 06:11:53.762827 systemd-networkd[835]: lo: Link UP
Jul 7 06:11:53.762839 systemd-networkd[835]: lo: Gained carrier
Jul 7 06:11:53.764485 systemd-networkd[835]: Enumeration completed
Jul 7 06:11:53.764594 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 06:11:53.765556 systemd-networkd[835]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:11:53.765561 systemd-networkd[835]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 06:11:53.767398 systemd-networkd[835]: eth0: Link UP
Jul 7 06:11:53.767402 systemd-networkd[835]: eth0: Gained carrier
Jul 7 06:11:53.767411 systemd-networkd[835]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:11:53.767660 systemd[1]: Reached target network.target - Network.
Jul 7 06:11:53.770499 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 7 06:11:53.813152 ignition[843]: Ignition 2.21.0
Jul 7 06:11:53.813163 ignition[843]: Stage: fetch
Jul 7 06:11:53.813947 ignition[843]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:11:53.813965 ignition[843]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 7 06:11:53.814104 ignition[843]: parsed url from cmdline: ""
Jul 7 06:11:53.814108 ignition[843]: no config URL provided
Jul 7 06:11:53.814130 ignition[843]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 06:11:53.814148 ignition[843]: no config at "/usr/lib/ignition/user.ign"
Jul 7 06:11:53.814179 ignition[843]: PUT http://169.254.169.254/v1/token: attempt #1
Jul 7 06:11:53.814472 ignition[843]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Jul 7 06:11:54.014663 ignition[843]: PUT http://169.254.169.254/v1/token: attempt #2
Jul 7 06:11:54.014857 ignition[843]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Jul 7 06:11:54.258209 systemd-networkd[835]: eth0: DHCPv4 address 172.234.200.33/24, gateway 172.234.200.1 acquired from 23.205.167.125
Jul 7 06:11:54.415018 ignition[843]: PUT http://169.254.169.254/v1/token: attempt #3
Jul 7 06:11:54.505593 ignition[843]: PUT result: OK
Jul 7 06:11:54.505677 ignition[843]: GET http://169.254.169.254/v1/user-data: attempt #1
Jul 7 06:11:54.618287 ignition[843]: GET result: OK
Jul 7 06:11:54.618437 ignition[843]: parsing config with SHA512: ef1344ac134a3c10af4bda23d064eaf9fe14f5a7d3e435298cc53071f774ef81aafc1b87cd9027b2958af03a8decf76d2d0ee6f7728ff392e77f2e2915612df7
Jul 7 06:11:54.621426 unknown[843]: fetched base config from "system"
Jul 7 06:11:54.621436 unknown[843]: fetched base config from "system"
Jul 7 06:11:54.621647 ignition[843]: fetch: fetch complete
Jul 7 06:11:54.621442 unknown[843]: fetched user config from "akamai"
Jul 7 06:11:54.621652 ignition[843]: fetch: fetch passed
Jul 7 06:11:54.625810 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 7 06:11:54.621696 ignition[843]: Ignition finished successfully
Jul 7 06:11:54.629236 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 06:11:54.659475 ignition[850]: Ignition 2.21.0
Jul 7 06:11:54.670573 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 06:11:54.664612 ignition[850]: Stage: kargs
Jul 7 06:11:54.664781 ignition[850]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:11:54.664795 ignition[850]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 7 06:11:54.665564 ignition[850]: kargs: kargs passed
Jul 7 06:11:54.685253 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 06:11:54.665606 ignition[850]: Ignition finished successfully
Jul 7 06:11:54.711533 ignition[857]: Ignition 2.21.0
Jul 7 06:11:54.711543 ignition[857]: Stage: disks
Jul 7 06:11:54.711650 ignition[857]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:11:54.711660 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 7 06:11:54.713496 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 7 06:11:54.712211 ignition[857]: disks: disks passed
Jul 7 06:11:54.715173 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 7 06:11:54.712247 ignition[857]: Ignition finished successfully
Jul 7 06:11:54.715751 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 06:11:54.716799 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 06:11:54.717751 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 06:11:54.718927 systemd[1]: Reached target basic.target - Basic System.
Jul 7 06:11:54.721135 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 7 06:11:54.747655 systemd-fsck[865]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 7 06:11:54.751493 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 7 06:11:54.754193 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 7 06:11:54.864162 kernel: EXT4-fs (sda9): mounted filesystem df0fa228-af1b-4496-9a54-2d4ccccd27d9 r/w with ordered data mode. Quota mode: none.
Jul 7 06:11:54.864848 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 7 06:11:54.866182 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 7 06:11:54.869570 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 06:11:54.872180 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 7 06:11:54.873612 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 7 06:11:54.875715 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 7 06:11:54.876773 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 06:11:54.883612 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 7 06:11:54.887290 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 7 06:11:54.895155 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (873)
Jul 7 06:11:54.895187 kernel: BTRFS info (device sda6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:11:54.897901 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 06:11:54.898521 kernel: BTRFS info (device sda6): using free-space-tree
Jul 7 06:11:54.904496 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 06:11:54.942407 initrd-setup-root[898]: cut: /sysroot/etc/passwd: No such file or directory
Jul 7 06:11:54.948286 initrd-setup-root[905]: cut: /sysroot/etc/group: No such file or directory
Jul 7 06:11:54.953600 initrd-setup-root[912]: cut: /sysroot/etc/shadow: No such file or directory
Jul 7 06:11:54.958797 initrd-setup-root[919]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 7 06:11:55.057979 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 7 06:11:55.061585 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 7 06:11:55.063559 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 7 06:11:55.078187 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 7 06:11:55.082014 kernel: BTRFS info (device sda6): last unmount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:11:55.096394 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 7 06:11:55.111886 ignition[988]: INFO : Ignition 2.21.0
Jul 7 06:11:55.111886 ignition[988]: INFO : Stage: mount
Jul 7 06:11:55.113066 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:11:55.113066 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 7 06:11:55.113066 ignition[988]: INFO : mount: mount passed
Jul 7 06:11:55.113066 ignition[988]: INFO : Ignition finished successfully
Jul 7 06:11:55.114553 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 7 06:11:55.118008 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 7 06:11:55.445557 systemd-networkd[835]: eth0: Gained IPv6LL
Jul 7 06:11:55.866415 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 06:11:55.890159 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (999)
Jul 7 06:11:55.893302 kernel: BTRFS info (device sda6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:11:55.893376 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 06:11:55.896021 kernel: BTRFS info (device sda6): using free-space-tree
Jul 7 06:11:55.900709 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 06:11:55.932244 ignition[1015]: INFO : Ignition 2.21.0
Jul 7 06:11:55.932244 ignition[1015]: INFO : Stage: files
Jul 7 06:11:55.933983 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:11:55.933983 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 7 06:11:55.933983 ignition[1015]: DEBUG : files: compiled without relabeling support, skipping
Jul 7 06:11:55.937320 ignition[1015]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 7 06:11:55.937320 ignition[1015]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 7 06:11:55.939261 ignition[1015]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 7 06:11:55.939261 ignition[1015]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 7 06:11:55.939261 ignition[1015]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 7 06:11:55.938361 unknown[1015]: wrote ssh authorized keys file for user: core
Jul 7 06:11:55.943207 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 7 06:11:55.943207 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 7 06:11:56.223324 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 7 06:11:56.526182 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 7 06:11:56.526182 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 7 06:11:56.528449 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 06:11:56.529798 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 06:11:56.529798 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 06:11:56.529798 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 06:11:56.529798 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 06:11:56.529798 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 06:11:56.529798 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 06:11:56.535348 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 06:11:56.535348 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 06:11:56.535348 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 7 06:11:56.535348 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 7 06:11:56.535348 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 7 06:11:56.535348 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 7 06:11:56.978760 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 7 06:11:57.200863 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 7 06:11:57.200863 ignition[1015]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 7 06:11:57.204436 ignition[1015]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 06:11:57.207225 ignition[1015]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 06:11:57.207225 ignition[1015]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 7 06:11:57.207225 ignition[1015]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 7 06:11:57.207225 ignition[1015]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jul 7 06:11:57.207225 ignition[1015]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jul 7 06:11:57.207225 ignition[1015]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 7 06:11:57.207225 ignition[1015]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Jul 7 06:11:57.207225 ignition[1015]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Jul 7 06:11:57.207225 ignition[1015]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 06:11:57.207225 ignition[1015]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 06:11:57.207225 ignition[1015]: INFO : files: files passed
Jul 7 06:11:57.207225 ignition[1015]: INFO : Ignition finished successfully
Jul 7 06:11:57.208321 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 7 06:11:57.210395 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 7 06:11:57.218169 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 7 06:11:57.228435 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 7 06:11:57.233270 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 7 06:11:57.244023 initrd-setup-root-after-ignition[1046]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:11:57.244023 initrd-setup-root-after-ignition[1046]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:11:57.246610 initrd-setup-root-after-ignition[1050]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:11:57.249795 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 06:11:57.250625 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 7 06:11:57.252686 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 7 06:11:57.299568 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 7 06:11:57.299716 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 7 06:11:57.301057 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 7 06:11:57.302085 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 7 06:11:57.303336 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 7 06:11:57.304490 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 7 06:11:57.342496 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 06:11:57.344635 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 7 06:11:57.365780 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:11:57.366820 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:11:57.368181 systemd[1]: Stopped target timers.target - Timer Units.
Jul 7 06:11:57.369436 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 7 06:11:57.369582 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 06:11:57.370855 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 7 06:11:57.371634 systemd[1]: Stopped target basic.target - Basic System.
Jul 7 06:11:57.372943 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 7 06:11:57.374061 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 06:11:57.375165 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 7 06:11:57.376421 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 7 06:11:57.377651 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 7 06:11:57.378924 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 06:11:57.380225 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 7 06:11:57.381501 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 7 06:11:57.382738 systemd[1]: Stopped target swap.target - Swaps.
Jul 7 06:11:57.383917 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 7 06:11:57.384079 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 06:11:57.385417 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:11:57.386286 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:11:57.387309 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 7 06:11:57.387433 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:11:57.388604 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 7 06:11:57.388753 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 7 06:11:57.390385 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 7 06:11:57.390549 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 06:11:57.391803 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 7 06:11:57.391943 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 7 06:11:57.395206 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 7 06:11:57.396266 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 7 06:11:57.396391 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:11:57.399265 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 7 06:11:57.400427 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 7 06:11:57.400977 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:11:57.401787 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 7 06:11:57.401930 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 06:11:57.409607 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 7 06:11:57.411501 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 7 06:11:57.422325 ignition[1070]: INFO : Ignition 2.21.0
Jul 7 06:11:57.422325 ignition[1070]: INFO : Stage: umount
Jul 7 06:11:57.426156 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:11:57.426156 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 7 06:11:57.426156 ignition[1070]: INFO : umount: umount passed
Jul 7 06:11:57.426156 ignition[1070]: INFO : Ignition finished successfully
Jul 7 06:11:57.427839 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 7 06:11:57.427975 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 7 06:11:57.430878 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 7 06:11:57.430930 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 7 06:11:57.454193 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 7 06:11:57.454257 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 7 06:11:57.455231 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 7 06:11:57.455278 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 7 06:11:57.456315 systemd[1]: Stopped target network.target - Network.
Jul 7 06:11:57.457309 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 7 06:11:57.457361 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 06:11:57.458410 systemd[1]: Stopped target paths.target - Path Units.
Jul 7 06:11:57.459424 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 7 06:11:57.466226 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:11:57.466836 systemd[1]: Stopped target slices.target - Slice Units.
Jul 7 06:11:57.468138 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 7 06:11:57.469239 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 7 06:11:57.469286 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 06:11:57.470301 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 7 06:11:57.470342 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 06:11:57.471336 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 7 06:11:57.471392 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 7 06:11:57.472414 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 7 06:11:57.472460 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 7 06:11:57.473575 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 7 06:11:57.474769 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 7 06:11:57.477437 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 7 06:11:57.478010 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 7 06:11:57.478142 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 7 06:11:57.485386 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 7 06:11:57.485494 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 7 06:11:57.487794 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 7 06:11:57.488180 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 7 06:11:57.492602 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 7 06:11:57.492879 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 7 06:11:57.493003 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 7 06:11:57.495178 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 7 06:11:57.496072 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 7 06:11:57.496969 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 7 06:11:57.497013 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:11:57.499016 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 7 06:11:57.501462 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 7 06:11:57.501526 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 06:11:57.502107 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 7 06:11:57.502173 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:11:57.504000 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 7 06:11:57.504048 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:11:57.505262 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 7 06:11:57.505314 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:11:57.506234 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:11:57.508770 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 7 06:11:57.508839 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 7 06:11:57.526465 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 7 06:11:57.526637 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:11:57.527719 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 7 06:11:57.527782 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:11:57.529792 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 7 06:11:57.529832 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:11:57.531005 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 7 06:11:57.531055 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 06:11:57.532738 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 7 06:11:57.532786 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 7 06:11:57.533955 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 06:11:57.534007 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:11:57.536097 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 7 06:11:57.538443 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 7 06:11:57.538499 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 06:11:57.540257 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 7 06:11:57.540315 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:11:57.541438 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 06:11:57.541485 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:11:57.544866 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 7 06:11:57.544933 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 7 06:11:57.544980 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 7 06:11:57.545352 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 7 06:11:57.546266 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 7 06:11:57.552969 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 7 06:11:57.553168 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 7 06:11:57.554876 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 7 06:11:57.556560 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 7 06:11:57.590141 systemd[1]: Switching root.
Jul 7 06:11:57.631795 systemd-journald[206]: Journal stopped
Jul 7 06:11:58.780993 systemd-journald[206]: Received SIGTERM from PID 1 (systemd).
Jul 7 06:11:58.781024 kernel: SELinux: policy capability network_peer_controls=1
Jul 7 06:11:58.781036 kernel: SELinux: policy capability open_perms=1
Jul 7 06:11:58.781048 kernel: SELinux: policy capability extended_socket_class=1
Jul 7 06:11:58.781057 kernel: SELinux: policy capability always_check_network=0
Jul 7 06:11:58.781065 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 7 06:11:58.781074 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 7 06:11:58.781083 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 7 06:11:58.781091 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 7 06:11:58.781100 kernel: SELinux: policy capability userspace_initial_context=0
Jul 7 06:11:58.781111 kernel: audit: type=1403 audit(1751868717.786:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 7 06:11:58.782161 systemd[1]: Successfully loaded SELinux policy in 59.585ms.
Jul 7 06:11:58.782174 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.905ms.
Jul 7 06:11:58.782185 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 06:11:58.782195 systemd[1]: Detected virtualization kvm.
Jul 7 06:11:58.782208 systemd[1]: Detected architecture x86-64.
Jul 7 06:11:58.782218 systemd[1]: Detected first boot.
Jul 7 06:11:58.782228 systemd[1]: Initializing machine ID from random generator.
Jul 7 06:11:58.782237 zram_generator::config[1114]: No configuration found.
Jul 7 06:11:58.782248 kernel: Guest personality initialized and is inactive
Jul 7 06:11:58.782257 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 7 06:11:58.782265 kernel: Initialized host personality
Jul 7 06:11:58.782276 kernel: NET: Registered PF_VSOCK protocol family
Jul 7 06:11:58.782286 systemd[1]: Populated /etc with preset unit settings.
Jul 7 06:11:58.782297 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 7 06:11:58.782307 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 7 06:11:58.782316 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 7 06:11:58.782326 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 7 06:11:58.782356 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 7 06:11:58.782370 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 7 06:11:58.782380 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 7 06:11:58.782389 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 7 06:11:58.782399 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 7 06:11:58.782408 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 7 06:11:58.782419 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 7 06:11:58.782429 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 7 06:11:58.782440 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:11:58.782450 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:11:58.782460 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 7 06:11:58.782470 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 7 06:11:58.782482 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 7 06:11:58.782493 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 06:11:58.782503 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 7 06:11:58.782512 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:11:58.782524 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:11:58.782534 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 7 06:11:58.782544 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 7 06:11:58.782553 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 7 06:11:58.782563 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 7 06:11:58.782573 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:11:58.782583 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 06:11:58.782592 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 06:11:58.782604 systemd[1]: Reached target swap.target - Swaps.
Jul 7 06:11:58.782613 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 7 06:11:58.782623 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 7 06:11:58.782632 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 7 06:11:58.782642 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:11:58.782655 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:11:58.782664 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:11:58.782674 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 7 06:11:58.782684 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 7 06:11:58.782694 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 7 06:11:58.782704 systemd[1]: Mounting media.mount - External Media Directory...
Jul 7 06:11:58.782713 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:11:58.782723 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 7 06:11:58.782734 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 7 06:11:58.782744 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 7 06:11:58.782754 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 7 06:11:58.782764 systemd[1]: Reached target machines.target - Containers.
Jul 7 06:11:58.782774 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 7 06:11:58.782785 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:11:58.782795 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 06:11:58.782804 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 7 06:11:58.782816 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:11:58.782826 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 06:11:58.782835 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:11:58.782845 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 7 06:11:58.782854 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:11:58.782866 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 7 06:11:58.782876 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 7 06:11:58.782886 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 7 06:11:58.782896 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 7 06:11:58.782907 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 7 06:11:58.782918 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:11:58.782928 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 06:11:58.782937 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 06:11:58.782948 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 06:11:58.782958 kernel: fuse: init (API version 7.41)
Jul 7 06:11:58.782967 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 7 06:11:58.782977 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 7 06:11:58.782989 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 06:11:58.782999 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 7 06:11:58.783009 systemd[1]: Stopped verity-setup.service.
Jul 7 06:11:58.783019 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:11:58.783029 kernel: ACPI: bus type drm_connector registered
Jul 7 06:11:58.783039 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 7 06:11:58.783049 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 7 06:11:58.783058 kernel: loop: module loaded
Jul 7 06:11:58.783070 systemd[1]: Mounted media.mount - External Media Directory.
Jul 7 06:11:58.783080 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 7 06:11:58.783090 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 7 06:11:58.783100 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 7 06:11:58.783109 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:11:58.784627 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 7 06:11:58.784641 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 7 06:11:58.784651 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 7 06:11:58.784661 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:11:58.784675 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:11:58.784706 systemd-journald[1195]: Collecting audit messages is disabled.
Jul 7 06:11:58.784726 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 06:11:58.784737 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 06:11:58.784749 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:11:58.784759 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:11:58.784769 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 7 06:11:58.784779 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 7 06:11:58.784789 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:11:58.784799 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 06:11:58.784809 systemd-journald[1195]: Journal started
Jul 7 06:11:58.784837 systemd-journald[1195]: Runtime Journal (/run/log/journal/47969252a44d44f08066005e674578b0) is 8M, max 78.5M, 70.5M free.
Jul 7 06:11:58.406525 systemd[1]: Queued start job for default target multi-user.target.
Jul 7 06:11:58.416213 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jul 7 06:11:58.416665 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 7 06:11:58.787959 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 06:11:58.789593 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:11:58.790608 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 06:11:58.791568 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 7 06:11:58.792528 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 7 06:11:58.808872 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 06:11:58.813194 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 7 06:11:58.814847 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 7 06:11:58.817258 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 7 06:11:58.817291 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 06:11:58.818782 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 7 06:11:58.829221 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 7 06:11:58.831474 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:11:58.835182 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 7 06:11:58.839333 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 7 06:11:58.840164 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 06:11:58.842230 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 7 06:11:58.842856 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 06:11:58.843806 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:11:58.846136 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 7 06:11:58.849780 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 7 06:11:58.854960 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 7 06:11:58.855704 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 7 06:11:58.865995 systemd-journald[1195]: Time spent on flushing to /var/log/journal/47969252a44d44f08066005e674578b0 is 53.867ms for 995 entries.
Jul 7 06:11:58.865995 systemd-journald[1195]: System Journal (/var/log/journal/47969252a44d44f08066005e674578b0) is 8M, max 195.6M, 187.6M free.
Jul 7 06:11:58.931179 systemd-journald[1195]: Received client request to flush runtime journal.
Jul 7 06:11:58.931226 kernel: loop0: detected capacity change from 0 to 8
Jul 7 06:11:58.931247 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 7 06:11:58.870576 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 7 06:11:58.872539 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 7 06:11:58.875298 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 7 06:11:58.932838 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 7 06:11:58.951730 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:11:58.959674 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 7 06:11:58.966092 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:11:58.967410 kernel: loop1: detected capacity change from 0 to 146240
Jul 7 06:11:58.973252 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 7 06:11:58.977066 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 06:11:59.006155 kernel: loop2: detected capacity change from 0 to 221472
Jul 7 06:11:59.024978 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Jul 7 06:11:59.024995 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Jul 7 06:11:59.039267 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:11:59.046231 kernel: loop3: detected capacity change from 0 to 113872
Jul 7 06:11:59.084575 kernel: loop4: detected capacity change from 0 to 8
Jul 7 06:11:59.088145 kernel: loop5: detected capacity change from 0 to 146240
Jul 7 06:11:59.106230 kernel: loop6: detected capacity change from 0 to 221472
Jul 7 06:11:59.133231 kernel: loop7: detected capacity change from 0 to 113872
Jul 7 06:11:59.149135 (sd-merge)[1263]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
Jul 7 06:11:59.151766 (sd-merge)[1263]: Merged extensions into '/usr'.
Jul 7 06:11:59.156329 systemd[1]: Reload requested from client PID 1239 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 7 06:11:59.156436 systemd[1]: Reloading...
Jul 7 06:11:59.238151 zram_generator::config[1289]: No configuration found.
Jul 7 06:11:59.356861 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:11:59.427733 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 7 06:11:59.428258 systemd[1]: Reloading finished in 271 ms.
Jul 7 06:11:59.432179 ldconfig[1234]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 7 06:11:59.455145 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 7 06:11:59.456427 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 7 06:11:59.470250 systemd[1]: Starting ensure-sysext.service...
Jul 7 06:11:59.472894 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 06:11:59.501205 systemd[1]: Reload requested from client PID 1332 ('systemctl') (unit ensure-sysext.service)...
Jul 7 06:11:59.501224 systemd[1]: Reloading...
Jul 7 06:11:59.505626 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 7 06:11:59.506598 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 7 06:11:59.506881 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 7 06:11:59.507136 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 7 06:11:59.507927 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 7 06:11:59.510677 systemd-tmpfiles[1333]: ACLs are not supported, ignoring.
Jul 7 06:11:59.510789 systemd-tmpfiles[1333]: ACLs are not supported, ignoring.
Jul 7 06:11:59.516474 systemd-tmpfiles[1333]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 06:11:59.516555 systemd-tmpfiles[1333]: Skipping /boot
Jul 7 06:11:59.529750 systemd-tmpfiles[1333]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 06:11:59.529820 systemd-tmpfiles[1333]: Skipping /boot
Jul 7 06:11:59.613150 zram_generator::config[1360]: No configuration found.
Jul 7 06:11:59.707011 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:11:59.775913 systemd[1]: Reloading finished in 274 ms.
Jul 7 06:11:59.795063 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 7 06:11:59.807800 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:11:59.816511 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 7 06:11:59.820428 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 7 06:11:59.824571 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 7 06:11:59.831306 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 06:11:59.836435 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:11:59.839771 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 7 06:11:59.844272 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:11:59.844430 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:11:59.846978 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:11:59.852209 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:11:59.862388 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:11:59.868222 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:11:59.868390 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:11:59.868544 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:11:59.873100 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:11:59.873271 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:11:59.873412 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:11:59.873484 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:11:59.877066 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 7 06:11:59.877716 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:11:59.885710 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 7 06:11:59.892456 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 7 06:11:59.894872 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:11:59.895141 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:11:59.896542 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:11:59.897352 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:11:59.906337 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:11:59.906545 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 06:11:59.913721 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 7 06:11:59.924983 systemd-udevd[1410]: Using default interface naming scheme 'v255'.
Jul 7 06:11:59.925798 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:11:59.926643 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:11:59.928778 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:11:59.933368 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 06:11:59.937286 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:11:59.937950 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:11:59.937980 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:11:59.938039 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:11:59.938528 systemd[1]: Finished ensure-sysext.service.
Jul 7 06:11:59.943890 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 7 06:11:59.954787 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 7 06:11:59.968491 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:11:59.968779 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:11:59.970545 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 06:11:59.976887 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 06:11:59.977696 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 06:11:59.979679 augenrules[1450]: No rules
Jul 7 06:11:59.981584 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 7 06:11:59.982800 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 7 06:11:59.984604 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:11:59.985423 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:11:59.987521 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 06:11:59.987624 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 7 06:11:59.994837 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 7 06:11:59.997954 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 7 06:12:00.002849 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:12:00.010011 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 06:12:00.107057 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 7 06:12:00.107990 systemd[1]: Reached target time-set.target - System Time Set.
Jul 7 06:12:00.119979 systemd-networkd[1466]: lo: Link UP
Jul 7 06:12:00.119995 systemd-networkd[1466]: lo: Gained carrier
Jul 7 06:12:00.120984 systemd-networkd[1466]: Enumeration completed
Jul 7 06:12:00.121088 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 06:12:00.127276 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 7 06:12:00.130293 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 7 06:12:00.148254 systemd-resolved[1408]: Positive Trust Anchors:
Jul 7 06:12:00.148563 systemd-resolved[1408]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 06:12:00.148637 systemd-resolved[1408]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 06:12:00.152854 systemd-resolved[1408]: Defaulting to hostname 'linux'.
Jul 7 06:12:00.154420 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 06:12:00.155389 systemd[1]: Reached target network.target - Network.
Jul 7 06:12:00.155966 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:12:00.156711 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 06:12:00.157641 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 7 06:12:00.158566 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 7 06:12:00.159417 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 7 06:12:00.160380 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 7 06:12:00.161307 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 7 06:12:00.162831 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 7 06:12:00.163793 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 7 06:12:00.163935 systemd[1]: Reached target paths.target - Path Units.
Jul 7 06:12:00.164760 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 06:12:00.167417 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 7 06:12:00.172692 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 7 06:12:00.175988 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 7 06:12:00.177986 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 7 06:12:00.180248 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 7 06:12:00.192090 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 7 06:12:00.193506 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 7 06:12:00.195956 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 7 06:12:00.197493 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 7 06:12:00.210190 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 06:12:00.211241 systemd[1]: Reached target basic.target - Basic System.
Jul 7 06:12:00.234907 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 7 06:12:00.234996 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 7 06:12:00.240651 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 7 06:12:00.248258 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 7 06:12:00.252376 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 7 06:12:00.253867 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 7 06:12:00.257349 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 7 06:12:00.262020 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 7 06:12:00.262979 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 7 06:12:00.267498 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 7 06:12:00.271887 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 7 06:12:00.284242 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 7 06:12:00.288568 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 7 06:12:00.291036 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Refreshing passwd entry cache
Jul 7 06:12:00.296602 jq[1505]: false
Jul 7 06:12:00.303279 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 7 06:12:00.300089 oslogin_cache_refresh[1507]: Refreshing passwd entry cache
Jul 7 06:12:00.315312 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 7 06:12:00.318830 oslogin_cache_refresh[1507]: Failure getting users, quitting
Jul 7 06:12:00.319512 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Failure getting users, quitting
Jul 7 06:12:00.319512 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 7 06:12:00.319512 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Refreshing group entry cache
Jul 7 06:12:00.319512 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Failure getting groups, quitting
Jul 7 06:12:00.319512 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 7 06:12:00.316841 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 7 06:12:00.318847 oslogin_cache_refresh[1507]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 7 06:12:00.318884 oslogin_cache_refresh[1507]: Refreshing group entry cache
Jul 7 06:12:00.319366 oslogin_cache_refresh[1507]: Failure getting groups, quitting
Jul 7 06:12:00.319377 oslogin_cache_refresh[1507]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 7 06:12:00.325329 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 7 06:12:00.328079 systemd[1]: Starting update-engine.service - Update Engine...
Jul 7 06:12:00.338901 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 7 06:12:00.352465 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 7 06:12:00.353387 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 7 06:12:00.353614 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 7 06:12:00.353926 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 7 06:12:00.354677 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 7 06:12:00.362535 extend-filesystems[1506]: Found /dev/sda6
Jul 7 06:12:00.364745 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 7 06:12:00.365428 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 7 06:12:00.379989 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 7 06:12:00.382512 extend-filesystems[1506]: Found /dev/sda9
Jul 7 06:12:00.383787 systemd[1]: motdgen.service: Deactivated successfully.
Jul 7 06:12:00.387788 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 7 06:12:00.399643 extend-filesystems[1506]: Checking size of /dev/sda9
Jul 7 06:12:00.406802 jq[1519]: true
Jul 7 06:12:00.417148 update_engine[1517]: I20250707 06:12:00.414227 1517 main.cc:92] Flatcar Update Engine starting
Jul 7 06:12:00.420733 tar[1528]: linux-amd64/helm
Jul 7 06:12:00.430396 extend-filesystems[1506]: Resized partition /dev/sda9
Jul 7 06:12:00.433207 extend-filesystems[1550]: resize2fs 1.47.2 (1-Jan-2025)
Jul 7 06:12:00.441173 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks
Jul 7 06:12:00.443414 (ntainerd)[1540]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 7 06:12:00.451684 jq[1546]: true
Jul 7 06:12:00.478194 dbus-daemon[1503]: [system] SELinux support is enabled
Jul 7 06:12:00.478796 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 7 06:12:00.481942 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 7 06:12:00.481973 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 7 06:12:00.482740 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 7 06:12:00.482763 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 7 06:12:00.509334 systemd[1]: Started update-engine.service - Update Engine.
Jul 7 06:12:00.511807 update_engine[1517]: I20250707 06:12:00.511476 1517 update_check_scheduler.cc:74] Next update check in 3m12s
Jul 7 06:12:00.523190 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 7 06:12:00.534930 coreos-metadata[1502]: Jul 07 06:12:00.534 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Jul 7 06:12:00.621137 systemd-networkd[1466]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:12:00.621489 systemd-networkd[1466]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 06:12:00.623144 systemd-networkd[1466]: eth0: Link UP
Jul 7 06:12:00.624023 systemd-networkd[1466]: eth0: Gained carrier
Jul 7 06:12:00.624100 systemd-networkd[1466]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:12:00.627615 systemd-logind[1514]: New seat seat0.
Jul 7 06:12:00.629999 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 7 06:12:00.637333 bash[1576]: Updated "/home/core/.ssh/authorized_keys"
Jul 7 06:12:00.641787 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 7 06:12:00.647092 systemd[1]: Starting sshkeys.service...
Jul 7 06:12:00.672163 locksmithd[1555]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 7 06:12:00.708165 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 7 06:12:00.711461 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 7 06:12:00.715587 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 7 06:12:00.761397 kernel: mousedev: PS/2 mouse device common for all mice
Jul 7 06:12:00.795802 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jul 7 06:12:00.806684 kernel: EXT4-fs (sda9): resized filesystem to 20360187
Jul 7 06:12:00.820519 kernel: ACPI: button: Power Button [PWRF]
Jul 7 06:12:00.820543 containerd[1540]: time="2025-07-07T06:12:00Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 7 06:12:00.820543 containerd[1540]: time="2025-07-07T06:12:00.817546100Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 7 06:12:00.808890 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 7 06:12:00.823138 extend-filesystems[1550]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Jul 7 06:12:00.823138 extend-filesystems[1550]: old_desc_blocks = 1, new_desc_blocks = 10
Jul 7 06:12:00.823138 extend-filesystems[1550]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long.
Jul 7 06:12:00.827068 extend-filesystems[1506]: Resized filesystem in /dev/sda9
Jul 7 06:12:00.824970 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 7 06:12:00.826333 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 7 06:12:00.860969 containerd[1540]: time="2025-07-07T06:12:00.858021640Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.08µs"
Jul 7 06:12:00.860969 containerd[1540]: time="2025-07-07T06:12:00.858415960Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 7 06:12:00.860969 containerd[1540]: time="2025-07-07T06:12:00.858440390Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 7 06:12:00.860969 containerd[1540]: time="2025-07-07T06:12:00.858599570Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 7 06:12:00.860969 containerd[1540]: time="2025-07-07T06:12:00.858613110Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 7 06:12:00.860969 containerd[1540]: time="2025-07-07T06:12:00.858636340Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 7 06:12:00.860969 containerd[1540]: time="2025-07-07T06:12:00.858695090Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 7 06:12:00.860969 containerd[1540]: time="2025-07-07T06:12:00.858704750Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 7 06:12:00.860969 containerd[1540]: time="2025-07-07T06:12:00.858948100Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 7 06:12:00.860969 containerd[1540]: time="2025-07-07T06:12:00.858961570Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 7 06:12:00.860969 containerd[1540]: time="2025-07-07T06:12:00.858971970Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 7 06:12:00.860969 containerd[1540]: time="2025-07-07T06:12:00.858978710Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 7 06:12:00.861234 containerd[1540]: time="2025-07-07T06:12:00.859070960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 7 06:12:00.861234 containerd[1540]: time="2025-07-07T06:12:00.859683680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 7 06:12:00.863589 containerd[1540]: time="2025-07-07T06:12:00.861962550Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 7 06:12:00.863589 containerd[1540]: time="2025-07-07T06:12:00.861986000Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 7 06:12:00.863589 containerd[1540]: time="2025-07-07T06:12:00.862032390Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 7 06:12:00.863589 containerd[1540]: time="2025-07-07T06:12:00.862342150Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 7 06:12:00.863589 containerd[1540]: time="2025-07-07T06:12:00.862409880Z" level=info msg="metadata content store policy set" policy=shared
Jul 7 06:12:00.865304 containerd[1540]: time="2025-07-07T06:12:00.865001070Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 7 06:12:00.865304 containerd[1540]: time="2025-07-07T06:12:00.865051830Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 7 06:12:00.865304 containerd[1540]: time="2025-07-07T06:12:00.865065410Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 7 06:12:00.865304 containerd[1540]: time="2025-07-07T06:12:00.865076180Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 7 06:12:00.865304 containerd[1540]: time="2025-07-07T06:12:00.865086720Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 7 06:12:00.865304 containerd[1540]: time="2025-07-07T06:12:00.865187600Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 7 06:12:00.865304 containerd[1540]: time="2025-07-07T06:12:00.865203000Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 7 06:12:00.865304 containerd[1540]: time="2025-07-07T06:12:00.865213580Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 7 06:12:00.865304 containerd[1540]: time="2025-07-07T06:12:00.865222140Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 7 06:12:00.865304 containerd[1540]: time="2025-07-07T06:12:00.865230870Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 7 06:12:00.865304 containerd[1540]: time="2025-07-07T06:12:00.865246080Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 7 06:12:00.865304 containerd[1540]: time="2025-07-07T06:12:00.865260990Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 7 06:12:00.865516 containerd[1540]: time="2025-07-07T06:12:00.865364620Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 7 06:12:00.865516 containerd[1540]: time="2025-07-07T06:12:00.865383930Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 7 06:12:00.865516 containerd[1540]: time="2025-07-07T06:12:00.865397450Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 7 06:12:00.865516 containerd[1540]: time="2025-07-07T06:12:00.865407800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 7 06:12:00.865516 containerd[1540]: time="2025-07-07T06:12:00.865417090Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 7 06:12:00.865516 containerd[1540]: time="2025-07-07T06:12:00.865427500Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 7 06:12:00.865516 containerd[1540]: time="2025-07-07T06:12:00.865437610Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 7 06:12:00.865516 containerd[1540]: time="2025-07-07T06:12:00.865446800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 7 06:12:00.865516 containerd[1540]: time="2025-07-07T06:12:00.865456980Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 7 06:12:00.865516 containerd[1540]: time="2025-07-07T06:12:00.865466730Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 7 06:12:00.865516 containerd[1540]: time="2025-07-07T06:12:00.865475980Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 7 06:12:00.865688 containerd[1540]: time="2025-07-07T06:12:00.865538740Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 7 06:12:00.865688 containerd[1540]: time="2025-07-07T06:12:00.865556950Z" level=info msg="Start snapshots syncer"
Jul 7 06:12:00.865615 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 7 06:12:00.865801 containerd[1540]: time="2025-07-07T06:12:00.865690300Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 7 06:12:00.868755 containerd[1540]: time="2025-07-07T06:12:00.865910380Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 7 06:12:00.868755 containerd[1540]: time="2025-07-07T06:12:00.865953600Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 7 06:12:00.868895 containerd[1540]: time="2025-07-07T06:12:00.868351920Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 7 06:12:00.868895 containerd[1540]: time="2025-07-07T06:12:00.868472520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 7 06:12:00.868895 containerd[1540]: time="2025-07-07T06:12:00.868491960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 7 06:12:00.868895 containerd[1540]: time="2025-07-07T06:12:00.868502160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 7 06:12:00.868895 containerd[1540]: time="2025-07-07T06:12:00.868512880Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 7 06:12:00.868895 containerd[1540]: time="2025-07-07T06:12:00.868524890Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 7 06:12:00.868895 containerd[1540]: time="2025-07-07T06:12:00.868533850Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 7 06:12:00.868895 containerd[1540]: time="2025-07-07T06:12:00.868543340Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 7 06:12:00.868895 containerd[1540]: time="2025-07-07T06:12:00.868564260Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 7 06:12:00.868895 containerd[1540]: time="2025-07-07T06:12:00.868573270Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 7 06:12:00.868895 containerd[1540]: time="2025-07-07T06:12:00.868582010Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 7 06:12:00.869067 containerd[1540]: time="2025-07-07T06:12:00.869032360Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 7 06:12:00.869067 containerd[1540]: time="2025-07-07T06:12:00.869050160Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 7 06:12:00.869067 containerd[1540]: time="2025-07-07T06:12:00.869058320Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 7 06:12:00.870032 containerd[1540]: time="2025-07-07T06:12:00.869239820Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 7 06:12:00.870032 containerd[1540]: time="2025-07-07T06:12:00.869261220Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 7 06:12:00.870032 containerd[1540]: time="2025-07-07T06:12:00.869270910Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 7 06:12:00.870032 containerd[1540]: time="2025-07-07T06:12:00.869281900Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 7 06:12:00.870032 containerd[1540]: time="2025-07-07T06:12:00.869299040Z" level=info msg="runtime interface created"
Jul 7 06:12:00.870032 containerd[1540]: time="2025-07-07T06:12:00.869304250Z" level=info msg="created NRI interface"
Jul 7 06:12:00.870032 containerd[1540]: time="2025-07-07T06:12:00.869311430Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 7 06:12:00.870032 containerd[1540]: time="2025-07-07T06:12:00.869322140Z" level=info msg="Connect containerd service"
Jul 7 06:12:00.870032 containerd[1540]: time="2025-07-07T06:12:00.869342570Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 7 06:12:00.870215 containerd[1540]: time="2025-07-07T06:12:00.870191880Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 7 06:12:00.885221 coreos-metadata[1587]: Jul 07 06:12:00.885 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Jul 7 06:12:00.981769 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 7 06:12:00.982055 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 7 06:12:01.069912 containerd[1540]: time="2025-07-07T06:12:01.066762550Z" level=info msg="Start subscribing containerd event"
Jul 7 06:12:01.069912 containerd[1540]: time="2025-07-07T06:12:01.067609970Z" level=info msg="Start recovering state"
Jul 7 06:12:01.069912 containerd[1540]: time="2025-07-07T06:12:01.068197970Z" level=info msg="Start event monitor"
Jul 7 06:12:01.069912 containerd[1540]: time="2025-07-07T06:12:01.068683120Z" level=info msg="Start cni network conf syncer for default"
Jul 7 06:12:01.069912 containerd[1540]: time="2025-07-07T06:12:01.068697390Z" level=info msg="Start streaming server"
Jul 7 06:12:01.069912 containerd[1540]: time="2025-07-07T06:12:01.068714420Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jul 7 06:12:01.069912 containerd[1540]:
time="2025-07-07T06:12:01.068723710Z" level=info msg="runtime interface starting up..." Jul 7 06:12:01.069912 containerd[1540]: time="2025-07-07T06:12:01.068729820Z" level=info msg="starting plugins..." Jul 7 06:12:01.069912 containerd[1540]: time="2025-07-07T06:12:01.069224430Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 7 06:12:01.070408 containerd[1540]: time="2025-07-07T06:12:01.070263860Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 06:12:01.070612 containerd[1540]: time="2025-07-07T06:12:01.070579800Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 06:12:01.070727 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 06:12:01.073504 containerd[1540]: time="2025-07-07T06:12:01.072440660Z" level=info msg="containerd successfully booted in 0.262948s" Jul 7 06:12:01.103045 sshd_keygen[1547]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 06:12:01.119185 systemd-networkd[1466]: eth0: DHCPv4 address 172.234.200.33/24, gateway 172.234.200.1 acquired from 23.205.167.125 Jul 7 06:12:01.120094 dbus-daemon[1503]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1466 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 7 06:12:01.120281 systemd-timesyncd[1442]: Network configuration changed, trying to establish connection. Jul 7 06:12:01.125310 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 7 06:12:01.149731 kernel: EDAC MC: Ver: 3.0.0 Jul 7 06:12:01.159450 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 06:12:01.180279 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 06:12:01.212291 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:12:01.228378 systemd[1]: issuegen.service: Deactivated successfully. 
Jul 7 06:12:01.230227 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 06:12:01.240980 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 06:12:01.244761 systemd-logind[1514]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 7 06:12:01.257239 systemd-logind[1514]: Watching system buttons on /dev/input/event2 (Power Button) Jul 7 06:12:01.285956 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 06:12:01.293209 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 06:12:01.298303 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 7 06:12:01.299032 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 06:12:01.371699 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 7 06:12:01.372598 dbus-daemon[1503]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 7 06:12:01.375203 dbus-daemon[1503]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1630 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 7 06:12:01.426408 systemd[1]: Starting polkit.service - Authorization Manager... Jul 7 06:12:01.490073 tar[1528]: linux-amd64/LICENSE Jul 7 06:12:01.491640 tar[1528]: linux-amd64/README.md Jul 7 06:12:01.525707 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 06:12:01.553489 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 7 06:12:01.556333 coreos-metadata[1502]: Jul 07 06:12:01.556 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Jul 7 06:12:01.561630 polkitd[1648]: Started polkitd version 126 Jul 7 06:12:01.566083 polkitd[1648]: Loading rules from directory /etc/polkit-1/rules.d Jul 7 06:12:01.566391 polkitd[1648]: Loading rules from directory /run/polkit-1/rules.d Jul 7 06:12:01.566441 polkitd[1648]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 7 06:12:01.566656 polkitd[1648]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jul 7 06:12:01.566683 polkitd[1648]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 7 06:12:01.566723 polkitd[1648]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 7 06:12:01.567441 polkitd[1648]: Finished loading, compiling and executing 2 rules Jul 7 06:12:01.567688 systemd[1]: Started polkit.service - Authorization Manager. Jul 7 06:12:01.568078 dbus-daemon[1503]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 7 06:12:01.568561 polkitd[1648]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 7 06:12:01.578314 systemd-hostnamed[1630]: Hostname set to <172-234-200-33> (transient) Jul 7 06:12:01.579028 systemd-resolved[1408]: System hostname changed to '172-234-200-33'. Jul 7 06:12:01.647540 coreos-metadata[1502]: Jul 07 06:12:01.647 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Jul 7 06:12:01.717350 systemd-networkd[1466]: eth0: Gained IPv6LL Jul 7 06:12:01.718446 systemd-timesyncd[1442]: Network configuration changed, trying to establish connection. Jul 7 06:12:01.720944 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 06:12:01.722316 systemd[1]: Reached target network-online.target - Network is Online. 
Jul 7 06:12:01.725736 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:12:01.735415 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 06:12:01.760020 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 06:12:01.837422 coreos-metadata[1502]: Jul 07 06:12:01.837 INFO Fetch successful Jul 7 06:12:01.837601 coreos-metadata[1502]: Jul 07 06:12:01.837 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Jul 7 06:12:01.896458 coreos-metadata[1587]: Jul 07 06:12:01.896 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Jul 7 06:12:01.985923 coreos-metadata[1587]: Jul 07 06:12:01.985 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Jul 7 06:12:02.098358 coreos-metadata[1502]: Jul 07 06:12:02.098 INFO Fetch successful Jul 7 06:12:02.120143 coreos-metadata[1587]: Jul 07 06:12:02.118 INFO Fetch successful Jul 7 06:12:02.143551 update-ssh-keys[1681]: Updated "/home/core/.ssh/authorized_keys" Jul 7 06:12:02.146097 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 7 06:12:02.149737 systemd[1]: Finished sshkeys.service. Jul 7 06:12:02.222491 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 7 06:12:02.223819 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 06:12:02.224916 systemd-timesyncd[1442]: Network configuration changed, trying to establish connection. Jul 7 06:12:02.691021 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:12:02.692407 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 06:12:02.728970 systemd[1]: Startup finished in 2.784s (kernel) + 7.130s (initrd) + 5.000s (userspace) = 14.915s. 
Jul 7 06:12:02.739354 (kubelet)[1705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:12:03.267100 kubelet[1705]: E0707 06:12:03.266986 1705 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:12:03.271528 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:12:03.271741 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:12:03.272500 systemd[1]: kubelet.service: Consumed 893ms CPU time, 265.2M memory peak. Jul 7 06:12:03.320871 systemd-timesyncd[1442]: Network configuration changed, trying to establish connection. Jul 7 06:12:05.161570 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 06:12:05.163548 systemd[1]: Started sshd@0-172.234.200.33:22-147.75.109.163:59174.service - OpenSSH per-connection server daemon (147.75.109.163:59174). Jul 7 06:12:05.524533 sshd[1717]: Accepted publickey for core from 147.75.109.163 port 59174 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4 Jul 7 06:12:05.526392 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:12:05.539223 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 06:12:05.540802 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 06:12:05.549014 systemd-logind[1514]: New session 1 of user core. Jul 7 06:12:05.561928 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 06:12:05.565883 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jul 7 06:12:05.579329 (systemd)[1721]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 06:12:05.582154 systemd-logind[1514]: New session c1 of user core. Jul 7 06:12:05.716639 systemd[1721]: Queued start job for default target default.target. Jul 7 06:12:05.733347 systemd[1721]: Created slice app.slice - User Application Slice. Jul 7 06:12:05.733377 systemd[1721]: Reached target paths.target - Paths. Jul 7 06:12:05.733420 systemd[1721]: Reached target timers.target - Timers. Jul 7 06:12:05.734904 systemd[1721]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 06:12:05.747752 systemd[1721]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 06:12:05.747879 systemd[1721]: Reached target sockets.target - Sockets. Jul 7 06:12:05.747914 systemd[1721]: Reached target basic.target - Basic System. Jul 7 06:12:05.747955 systemd[1721]: Reached target default.target - Main User Target. Jul 7 06:12:05.747986 systemd[1721]: Startup finished in 159ms. Jul 7 06:12:05.748406 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 06:12:05.758518 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 06:12:06.015884 systemd[1]: Started sshd@1-172.234.200.33:22-147.75.109.163:59178.service - OpenSSH per-connection server daemon (147.75.109.163:59178). Jul 7 06:12:06.356090 sshd[1732]: Accepted publickey for core from 147.75.109.163 port 59178 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4 Jul 7 06:12:06.358203 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:12:06.364735 systemd-logind[1514]: New session 2 of user core. Jul 7 06:12:06.374266 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 7 06:12:06.603679 sshd[1734]: Connection closed by 147.75.109.163 port 59178 Jul 7 06:12:06.604301 sshd-session[1732]: pam_unix(sshd:session): session closed for user core Jul 7 06:12:06.609281 systemd[1]: sshd@1-172.234.200.33:22-147.75.109.163:59178.service: Deactivated successfully. Jul 7 06:12:06.611677 systemd[1]: session-2.scope: Deactivated successfully. Jul 7 06:12:06.612484 systemd-logind[1514]: Session 2 logged out. Waiting for processes to exit. Jul 7 06:12:06.614095 systemd-logind[1514]: Removed session 2. Jul 7 06:12:06.673869 systemd[1]: Started sshd@2-172.234.200.33:22-147.75.109.163:59188.service - OpenSSH per-connection server daemon (147.75.109.163:59188). Jul 7 06:12:07.023450 sshd[1740]: Accepted publickey for core from 147.75.109.163 port 59188 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4 Jul 7 06:12:07.025187 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:12:07.030945 systemd-logind[1514]: New session 3 of user core. Jul 7 06:12:07.038247 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 06:12:07.275681 sshd[1742]: Connection closed by 147.75.109.163 port 59188 Jul 7 06:12:07.276601 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Jul 7 06:12:07.283043 systemd-logind[1514]: Session 3 logged out. Waiting for processes to exit. Jul 7 06:12:07.283348 systemd[1]: sshd@2-172.234.200.33:22-147.75.109.163:59188.service: Deactivated successfully. Jul 7 06:12:07.285421 systemd[1]: session-3.scope: Deactivated successfully. Jul 7 06:12:07.287175 systemd-logind[1514]: Removed session 3. Jul 7 06:12:07.336984 systemd[1]: Started sshd@3-172.234.200.33:22-147.75.109.163:59192.service - OpenSSH per-connection server daemon (147.75.109.163:59192). 
Jul 7 06:12:07.683006 sshd[1748]: Accepted publickey for core from 147.75.109.163 port 59192 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4 Jul 7 06:12:07.684823 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:12:07.690394 systemd-logind[1514]: New session 4 of user core. Jul 7 06:12:07.698259 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 06:12:07.931383 sshd[1750]: Connection closed by 147.75.109.163 port 59192 Jul 7 06:12:07.931814 sshd-session[1748]: pam_unix(sshd:session): session closed for user core Jul 7 06:12:07.935992 systemd[1]: sshd@3-172.234.200.33:22-147.75.109.163:59192.service: Deactivated successfully. Jul 7 06:12:07.938221 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 06:12:07.940277 systemd-logind[1514]: Session 4 logged out. Waiting for processes to exit. Jul 7 06:12:07.941277 systemd-logind[1514]: Removed session 4. Jul 7 06:12:07.994863 systemd[1]: Started sshd@4-172.234.200.33:22-147.75.109.163:59198.service - OpenSSH per-connection server daemon (147.75.109.163:59198). Jul 7 06:12:08.351752 sshd[1756]: Accepted publickey for core from 147.75.109.163 port 59198 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4 Jul 7 06:12:08.353387 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:12:08.358377 systemd-logind[1514]: New session 5 of user core. Jul 7 06:12:08.365296 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 7 06:12:08.560891 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 06:12:08.561270 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:12:08.576223 sudo[1759]: pam_unix(sudo:session): session closed for user root Jul 7 06:12:08.628564 sshd[1758]: Connection closed by 147.75.109.163 port 59198 Jul 7 06:12:08.629702 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Jul 7 06:12:08.634367 systemd[1]: sshd@4-172.234.200.33:22-147.75.109.163:59198.service: Deactivated successfully. Jul 7 06:12:08.636458 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 06:12:08.639506 systemd-logind[1514]: Session 5 logged out. Waiting for processes to exit. Jul 7 06:12:08.640775 systemd-logind[1514]: Removed session 5. Jul 7 06:12:08.686653 systemd[1]: Started sshd@5-172.234.200.33:22-147.75.109.163:59200.service - OpenSSH per-connection server daemon (147.75.109.163:59200). Jul 7 06:12:09.020252 sshd[1765]: Accepted publickey for core from 147.75.109.163 port 59200 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4 Jul 7 06:12:09.022228 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:12:09.026796 systemd-logind[1514]: New session 6 of user core. Jul 7 06:12:09.032253 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 7 06:12:09.216572 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 06:12:09.216886 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:12:09.222397 sudo[1769]: pam_unix(sudo:session): session closed for user root Jul 7 06:12:09.228444 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 7 06:12:09.228753 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:12:09.239931 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 06:12:09.282926 augenrules[1791]: No rules Jul 7 06:12:09.284279 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 06:12:09.284780 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 06:12:09.286929 sudo[1768]: pam_unix(sudo:session): session closed for user root Jul 7 06:12:09.336839 sshd[1767]: Connection closed by 147.75.109.163 port 59200 Jul 7 06:12:09.337325 sshd-session[1765]: pam_unix(sshd:session): session closed for user core Jul 7 06:12:09.340946 systemd[1]: sshd@5-172.234.200.33:22-147.75.109.163:59200.service: Deactivated successfully. Jul 7 06:12:09.342798 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 06:12:09.343873 systemd-logind[1514]: Session 6 logged out. Waiting for processes to exit. Jul 7 06:12:09.344916 systemd-logind[1514]: Removed session 6. Jul 7 06:12:09.401429 systemd[1]: Started sshd@6-172.234.200.33:22-147.75.109.163:59210.service - OpenSSH per-connection server daemon (147.75.109.163:59210). 
Jul 7 06:12:09.752337 sshd[1800]: Accepted publickey for core from 147.75.109.163 port 59210 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4 Jul 7 06:12:09.753994 sshd-session[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:12:09.758164 systemd-logind[1514]: New session 7 of user core. Jul 7 06:12:09.765238 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 06:12:09.955832 sudo[1803]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 06:12:09.956146 sudo[1803]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:12:10.242178 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 7 06:12:10.257593 (dockerd)[1821]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 06:12:10.445604 dockerd[1821]: time="2025-07-07T06:12:10.445309510Z" level=info msg="Starting up" Jul 7 06:12:10.448031 dockerd[1821]: time="2025-07-07T06:12:10.448004670Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 7 06:12:10.473694 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3816582268-merged.mount: Deactivated successfully. Jul 7 06:12:10.498960 dockerd[1821]: time="2025-07-07T06:12:10.498881310Z" level=info msg="Loading containers: start." Jul 7 06:12:10.511151 kernel: Initializing XFRM netlink socket Jul 7 06:12:10.700934 systemd-timesyncd[1442]: Network configuration changed, trying to establish connection. Jul 7 06:12:10.741195 systemd-networkd[1466]: docker0: Link UP Jul 7 06:12:10.744139 dockerd[1821]: time="2025-07-07T06:12:10.744082610Z" level=info msg="Loading containers: done." 
Jul 7 06:12:10.757901 dockerd[1821]: time="2025-07-07T06:12:10.757831850Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 06:12:10.757901 dockerd[1821]: time="2025-07-07T06:12:10.757889350Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 7 06:12:10.758036 dockerd[1821]: time="2025-07-07T06:12:10.757987560Z" level=info msg="Initializing buildkit" Jul 7 06:12:10.776207 dockerd[1821]: time="2025-07-07T06:12:10.776178160Z" level=info msg="Completed buildkit initialization" Jul 7 06:12:10.782134 dockerd[1821]: time="2025-07-07T06:12:10.782098100Z" level=info msg="Daemon has completed initialization" Jul 7 06:12:10.782269 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 06:12:10.782688 dockerd[1821]: time="2025-07-07T06:12:10.782551440Z" level=info msg="API listen on /run/docker.sock" Jul 7 06:12:12.406867 systemd-resolved[1408]: Clock change detected. Flushing caches. Jul 7 06:12:12.407312 systemd-timesyncd[1442]: Contacted time server [2600:1702:7400:9ac0::314:5a]:123 (2.flatcar.pool.ntp.org). Jul 7 06:12:12.407359 systemd-timesyncd[1442]: Initial clock synchronization to Mon 2025-07-07 06:12:12.406741 UTC. Jul 7 06:12:12.952703 containerd[1540]: time="2025-07-07T06:12:12.952636201Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 7 06:12:13.080267 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3407824808-merged.mount: Deactivated successfully. Jul 7 06:12:13.621934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1608206451.mount: Deactivated successfully. 
Jul 7 06:12:14.926690 containerd[1540]: time="2025-07-07T06:12:14.926577541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:14.927794 containerd[1540]: time="2025-07-07T06:12:14.927589701Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077750" Jul 7 06:12:14.928532 containerd[1540]: time="2025-07-07T06:12:14.928489631Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:14.931159 containerd[1540]: time="2025-07-07T06:12:14.931119131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:14.932361 containerd[1540]: time="2025-07-07T06:12:14.932321701Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 1.97963261s" Jul 7 06:12:14.932400 containerd[1540]: time="2025-07-07T06:12:14.932366121Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 7 06:12:14.933652 containerd[1540]: time="2025-07-07T06:12:14.933595131Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 7 06:12:15.131431 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jul 7 06:12:15.133698 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:12:15.329048 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:12:15.340188 (kubelet)[2086]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:12:15.376241 kubelet[2086]: E0707 06:12:15.376146 2086 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:12:15.381708 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:12:15.381940 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:12:15.382381 systemd[1]: kubelet.service: Consumed 192ms CPU time, 110.9M memory peak. Jul 7 06:12:15.911974 systemd[1]: Started sshd@7-172.234.200.33:22-221.151.109.150:34396.service - OpenSSH per-connection server daemon (221.151.109.150:34396). 
Jul 7 06:12:16.522431 containerd[1540]: time="2025-07-07T06:12:16.522355721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:16.523646 containerd[1540]: time="2025-07-07T06:12:16.523618691Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713300" Jul 7 06:12:16.524079 containerd[1540]: time="2025-07-07T06:12:16.523993411Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:16.526152 containerd[1540]: time="2025-07-07T06:12:16.526109371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:16.527274 containerd[1540]: time="2025-07-07T06:12:16.527033771Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.59338401s" Jul 7 06:12:16.527274 containerd[1540]: time="2025-07-07T06:12:16.527072631Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 7 06:12:16.527677 containerd[1540]: time="2025-07-07T06:12:16.527647011Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 7 06:12:17.697265 containerd[1540]: time="2025-07-07T06:12:17.697215601Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:17.698229 containerd[1540]: time="2025-07-07T06:12:17.698208471Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783677" Jul 7 06:12:17.698956 containerd[1540]: time="2025-07-07T06:12:17.698907071Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:17.704324 containerd[1540]: time="2025-07-07T06:12:17.704299441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:17.704840 containerd[1540]: time="2025-07-07T06:12:17.704796711Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.17712148s" Jul 7 06:12:17.704874 containerd[1540]: time="2025-07-07T06:12:17.704843701Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 7 06:12:17.705391 containerd[1540]: time="2025-07-07T06:12:17.705307141Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 7 06:12:17.835744 sshd[2093]: maximum authentication attempts exceeded for root from 221.151.109.150 port 34396 ssh2 [preauth] Jul 7 06:12:17.836299 sshd[2093]: Disconnecting authenticating user root 221.151.109.150 port 34396: Too many authentication failures [preauth] Jul 7 06:12:17.838535 systemd[1]: 
sshd@7-172.234.200.33:22-221.151.109.150:34396.service: Deactivated successfully. Jul 7 06:12:18.818449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2600020362.mount: Deactivated successfully. Jul 7 06:12:19.154462 containerd[1540]: time="2025-07-07T06:12:19.154281741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:19.155502 containerd[1540]: time="2025-07-07T06:12:19.155356081Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383949" Jul 7 06:12:19.155977 containerd[1540]: time="2025-07-07T06:12:19.155947991Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:19.157229 containerd[1540]: time="2025-07-07T06:12:19.157199561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:19.157679 containerd[1540]: time="2025-07-07T06:12:19.157647251Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.45231366s" Jul 7 06:12:19.157722 containerd[1540]: time="2025-07-07T06:12:19.157679391Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 7 06:12:19.158417 containerd[1540]: time="2025-07-07T06:12:19.158382271Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 7 
06:12:19.732259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1874177216.mount: Deactivated successfully. Jul 7 06:12:20.547370 containerd[1540]: time="2025-07-07T06:12:20.547314641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:20.548206 containerd[1540]: time="2025-07-07T06:12:20.548148291Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565247" Jul 7 06:12:20.548702 containerd[1540]: time="2025-07-07T06:12:20.548677961Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:20.550854 containerd[1540]: time="2025-07-07T06:12:20.550696441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:20.551643 containerd[1540]: time="2025-07-07T06:12:20.551535391Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.39312568s" Jul 7 06:12:20.551643 containerd[1540]: time="2025-07-07T06:12:20.551563191Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 7 06:12:20.552156 containerd[1540]: time="2025-07-07T06:12:20.552131851Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 06:12:21.089531 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount308615233.mount: Deactivated successfully. Jul 7 06:12:21.094386 containerd[1540]: time="2025-07-07T06:12:21.094345111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:12:21.095139 containerd[1540]: time="2025-07-07T06:12:21.095077071Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321144" Jul 7 06:12:21.095951 containerd[1540]: time="2025-07-07T06:12:21.095923781Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:12:21.097561 containerd[1540]: time="2025-07-07T06:12:21.097524621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:12:21.098343 containerd[1540]: time="2025-07-07T06:12:21.098233611Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 546.07354ms" Jul 7 06:12:21.098343 containerd[1540]: time="2025-07-07T06:12:21.098260181Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 7 06:12:21.098989 containerd[1540]: time="2025-07-07T06:12:21.098959731Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 7 06:12:21.613880 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3794938113.mount: Deactivated successfully. Jul 7 06:12:22.857096 containerd[1540]: time="2025-07-07T06:12:22.857031971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:22.857956 containerd[1540]: time="2025-07-07T06:12:22.857926181Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780019" Jul 7 06:12:22.858766 containerd[1540]: time="2025-07-07T06:12:22.858734101Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:22.860934 containerd[1540]: time="2025-07-07T06:12:22.860892571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:22.861754 containerd[1540]: time="2025-07-07T06:12:22.861720061Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 1.76272677s" Jul 7 06:12:22.861796 containerd[1540]: time="2025-07-07T06:12:22.861754881Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 7 06:12:24.959168 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:12:24.959699 systemd[1]: kubelet.service: Consumed 192ms CPU time, 110.9M memory peak. Jul 7 06:12:24.962208 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 7 06:12:24.992883 systemd[1]: Reload requested from client PID 2250 ('systemctl') (unit session-7.scope)... Jul 7 06:12:24.992972 systemd[1]: Reloading... Jul 7 06:12:25.135982 zram_generator::config[2303]: No configuration found. Jul 7 06:12:25.217611 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:12:25.323898 systemd[1]: Reloading finished in 330 ms. Jul 7 06:12:25.391368 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 06:12:25.391463 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 06:12:25.392067 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:12:25.392114 systemd[1]: kubelet.service: Consumed 139ms CPU time, 98.3M memory peak. Jul 7 06:12:25.393931 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:12:25.562418 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:12:25.574222 (kubelet)[2348]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 06:12:25.616891 kubelet[2348]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:12:25.616891 kubelet[2348]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 7 06:12:25.616891 kubelet[2348]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 7 06:12:25.617314 kubelet[2348]: I0707 06:12:25.616966 2348 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 06:12:25.974869 kubelet[2348]: I0707 06:12:25.974351 2348 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 06:12:25.974869 kubelet[2348]: I0707 06:12:25.974385 2348 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 06:12:25.974869 kubelet[2348]: I0707 06:12:25.974764 2348 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 06:12:26.001111 kubelet[2348]: E0707 06:12:26.001068 2348 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.234.200.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.234.200.33:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:12:26.003451 kubelet[2348]: I0707 06:12:26.003414 2348 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:12:26.011372 kubelet[2348]: I0707 06:12:26.011338 2348 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 06:12:26.016917 kubelet[2348]: I0707 06:12:26.016895 2348 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 06:12:26.017661 kubelet[2348]: I0707 06:12:26.017628 2348 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 06:12:26.017882 kubelet[2348]: I0707 06:12:26.017813 2348 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 06:12:26.018072 kubelet[2348]: I0707 06:12:26.017872 2348 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-200-33","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolic
yOptions":null,"CgroupVersion":2} Jul 7 06:12:26.018171 kubelet[2348]: I0707 06:12:26.018080 2348 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 06:12:26.018171 kubelet[2348]: I0707 06:12:26.018091 2348 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 06:12:26.018272 kubelet[2348]: I0707 06:12:26.018253 2348 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:12:26.021139 kubelet[2348]: I0707 06:12:26.020947 2348 kubelet.go:408] "Attempting to sync node with API server" Jul 7 06:12:26.021139 kubelet[2348]: I0707 06:12:26.020979 2348 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:12:26.021139 kubelet[2348]: I0707 06:12:26.021017 2348 kubelet.go:314] "Adding apiserver pod source" Jul 7 06:12:26.021139 kubelet[2348]: I0707 06:12:26.021037 2348 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:12:26.025078 kubelet[2348]: W0707 06:12:26.025043 2348 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.234.200.33:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-200-33&limit=500&resourceVersion=0": dial tcp 172.234.200.33:6443: connect: connection refused Jul 7 06:12:26.025361 kubelet[2348]: E0707 06:12:26.025343 2348 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.234.200.33:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-200-33&limit=500&resourceVersion=0\": dial tcp 172.234.200.33:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:12:26.025499 kubelet[2348]: I0707 06:12:26.025485 2348 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 06:12:26.026194 kubelet[2348]: I0707 06:12:26.025944 2348 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet 
mode" Jul 7 06:12:26.026194 kubelet[2348]: W0707 06:12:26.026022 2348 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 06:12:26.028754 kubelet[2348]: I0707 06:12:26.028730 2348 server.go:1274] "Started kubelet" Jul 7 06:12:26.029593 kubelet[2348]: W0707 06:12:26.029528 2348 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.234.200.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.234.200.33:6443: connect: connection refused Jul 7 06:12:26.029593 kubelet[2348]: E0707 06:12:26.029587 2348 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.234.200.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.234.200.33:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:12:26.029752 kubelet[2348]: I0707 06:12:26.029715 2348 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:12:26.034713 kubelet[2348]: I0707 06:12:26.034679 2348 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:12:26.035092 kubelet[2348]: I0707 06:12:26.035078 2348 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:12:26.038346 kubelet[2348]: E0707 06:12:26.037273 2348 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.234.200.33:6443/api/v1/namespaces/default/events\": dial tcp 172.234.200.33:6443: connect: connection refused" event="&Event{ObjectMeta:{172-234-200-33.184fe357bc2677ef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-234-200-33,UID:172-234-200-33,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-234-200-33,},FirstTimestamp:2025-07-07 06:12:26.028709871 +0000 UTC m=+0.449917421,LastTimestamp:2025-07-07 06:12:26.028709871 +0000 UTC m=+0.449917421,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-234-200-33,}" Jul 7 06:12:26.040303 kubelet[2348]: I0707 06:12:26.039741 2348 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:12:26.040369 kubelet[2348]: I0707 06:12:26.039749 2348 server.go:449] "Adding debug handlers to kubelet server" Jul 7 06:12:26.043102 kubelet[2348]: I0707 06:12:26.043067 2348 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 06:12:26.043324 kubelet[2348]: E0707 06:12:26.043290 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-234-200-33\" not found" Jul 7 06:12:26.044587 kubelet[2348]: I0707 06:12:26.044569 2348 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 06:12:26.046043 kubelet[2348]: E0707 06:12:26.046015 2348 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.200.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-200-33?timeout=10s\": dial tcp 172.234.200.33:6443: connect: connection refused" interval="200ms" Jul 7 06:12:26.046136 kubelet[2348]: I0707 06:12:26.046126 2348 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 06:12:26.046401 kubelet[2348]: I0707 06:12:26.046383 2348 factory.go:221] Registration of the systemd container factory successfully Jul 7 06:12:26.046628 kubelet[2348]: I0707 06:12:26.046612 2348 factory.go:219] 
Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:12:26.049180 kubelet[2348]: I0707 06:12:26.049150 2348 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:12:26.050386 kubelet[2348]: I0707 06:12:26.050371 2348 factory.go:221] Registration of the containerd container factory successfully Jul 7 06:12:26.057105 kubelet[2348]: W0707 06:12:26.057072 2348 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.234.200.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.234.200.33:6443: connect: connection refused Jul 7 06:12:26.057241 kubelet[2348]: E0707 06:12:26.057222 2348 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.234.200.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.234.200.33:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:12:26.059884 kubelet[2348]: I0707 06:12:26.059819 2348 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 06:12:26.060950 kubelet[2348]: I0707 06:12:26.060916 2348 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 7 06:12:26.060950 kubelet[2348]: I0707 06:12:26.060942 2348 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 06:12:26.061002 kubelet[2348]: I0707 06:12:26.060964 2348 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 06:12:26.061030 kubelet[2348]: E0707 06:12:26.061011 2348 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:12:26.069327 kubelet[2348]: W0707 06:12:26.069261 2348 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.234.200.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.234.200.33:6443: connect: connection refused Jul 7 06:12:26.069381 kubelet[2348]: E0707 06:12:26.069342 2348 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.234.200.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.234.200.33:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:12:26.079440 kubelet[2348]: E0707 06:12:26.079406 2348 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 06:12:26.084352 kubelet[2348]: I0707 06:12:26.084317 2348 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 06:12:26.084352 kubelet[2348]: I0707 06:12:26.084337 2348 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 06:12:26.084422 kubelet[2348]: I0707 06:12:26.084358 2348 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:12:26.086183 kubelet[2348]: I0707 06:12:26.086161 2348 policy_none.go:49] "None policy: Start" Jul 7 06:12:26.087682 kubelet[2348]: I0707 06:12:26.086976 2348 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 06:12:26.087682 kubelet[2348]: I0707 06:12:26.087005 2348 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:12:26.101119 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 06:12:26.114599 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 06:12:26.118879 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 7 06:12:26.129858 kubelet[2348]: I0707 06:12:26.129674 2348 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 06:12:26.130103 kubelet[2348]: I0707 06:12:26.130082 2348 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 06:12:26.130161 kubelet[2348]: I0707 06:12:26.130101 2348 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:12:26.130398 kubelet[2348]: I0707 06:12:26.130337 2348 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 06:12:26.133540 kubelet[2348]: E0707 06:12:26.133088 2348 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-234-200-33\" not found" Jul 7 06:12:26.172294 systemd[1]: Created slice kubepods-burstable-podebbf20442b32345c3cbf056fd5ba5d21.slice - libcontainer container kubepods-burstable-podebbf20442b32345c3cbf056fd5ba5d21.slice. Jul 7 06:12:26.180810 systemd[1]: Created slice kubepods-burstable-pod728627735c322c0bce6edf24f9774b6c.slice - libcontainer container kubepods-burstable-pod728627735c322c0bce6edf24f9774b6c.slice. Jul 7 06:12:26.191050 systemd[1]: Created slice kubepods-burstable-pod67f2eb647cc191db103edb0a591705d3.slice - libcontainer container kubepods-burstable-pod67f2eb647cc191db103edb0a591705d3.slice. 
Jul 7 06:12:26.232688 kubelet[2348]: I0707 06:12:26.232572 2348 kubelet_node_status.go:72] "Attempting to register node" node="172-234-200-33" Jul 7 06:12:26.234108 kubelet[2348]: E0707 06:12:26.234075 2348 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.234.200.33:6443/api/v1/nodes\": dial tcp 172.234.200.33:6443: connect: connection refused" node="172-234-200-33" Jul 7 06:12:26.247362 kubelet[2348]: E0707 06:12:26.247329 2348 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.200.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-200-33?timeout=10s\": dial tcp 172.234.200.33:6443: connect: connection refused" interval="400ms" Jul 7 06:12:26.249735 kubelet[2348]: I0707 06:12:26.249704 2348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/728627735c322c0bce6edf24f9774b6c-kubeconfig\") pod \"kube-controller-manager-172-234-200-33\" (UID: \"728627735c322c0bce6edf24f9774b6c\") " pod="kube-system/kube-controller-manager-172-234-200-33" Jul 7 06:12:26.249796 kubelet[2348]: I0707 06:12:26.249738 2348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/728627735c322c0bce6edf24f9774b6c-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-200-33\" (UID: \"728627735c322c0bce6edf24f9774b6c\") " pod="kube-system/kube-controller-manager-172-234-200-33" Jul 7 06:12:26.249796 kubelet[2348]: I0707 06:12:26.249762 2348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebbf20442b32345c3cbf056fd5ba5d21-ca-certs\") pod \"kube-apiserver-172-234-200-33\" (UID: \"ebbf20442b32345c3cbf056fd5ba5d21\") " pod="kube-system/kube-apiserver-172-234-200-33" Jul 7 
06:12:26.249796 kubelet[2348]: I0707 06:12:26.249778 2348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebbf20442b32345c3cbf056fd5ba5d21-k8s-certs\") pod \"kube-apiserver-172-234-200-33\" (UID: \"ebbf20442b32345c3cbf056fd5ba5d21\") " pod="kube-system/kube-apiserver-172-234-200-33" Jul 7 06:12:26.249796 kubelet[2348]: I0707 06:12:26.249793 2348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebbf20442b32345c3cbf056fd5ba5d21-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-200-33\" (UID: \"ebbf20442b32345c3cbf056fd5ba5d21\") " pod="kube-system/kube-apiserver-172-234-200-33" Jul 7 06:12:26.250038 kubelet[2348]: I0707 06:12:26.249809 2348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/728627735c322c0bce6edf24f9774b6c-ca-certs\") pod \"kube-controller-manager-172-234-200-33\" (UID: \"728627735c322c0bce6edf24f9774b6c\") " pod="kube-system/kube-controller-manager-172-234-200-33" Jul 7 06:12:26.250038 kubelet[2348]: I0707 06:12:26.249824 2348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/728627735c322c0bce6edf24f9774b6c-flexvolume-dir\") pod \"kube-controller-manager-172-234-200-33\" (UID: \"728627735c322c0bce6edf24f9774b6c\") " pod="kube-system/kube-controller-manager-172-234-200-33" Jul 7 06:12:26.250038 kubelet[2348]: I0707 06:12:26.249860 2348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/728627735c322c0bce6edf24f9774b6c-k8s-certs\") pod \"kube-controller-manager-172-234-200-33\" (UID: \"728627735c322c0bce6edf24f9774b6c\") " 
pod="kube-system/kube-controller-manager-172-234-200-33" Jul 7 06:12:26.250038 kubelet[2348]: I0707 06:12:26.249880 2348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/67f2eb647cc191db103edb0a591705d3-kubeconfig\") pod \"kube-scheduler-172-234-200-33\" (UID: \"67f2eb647cc191db103edb0a591705d3\") " pod="kube-system/kube-scheduler-172-234-200-33" Jul 7 06:12:26.436079 kubelet[2348]: I0707 06:12:26.436044 2348 kubelet_node_status.go:72] "Attempting to register node" node="172-234-200-33" Jul 7 06:12:26.436799 kubelet[2348]: E0707 06:12:26.436751 2348 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.234.200.33:6443/api/v1/nodes\": dial tcp 172.234.200.33:6443: connect: connection refused" node="172-234-200-33" Jul 7 06:12:26.479805 kubelet[2348]: E0707 06:12:26.479758 2348 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:12:26.480616 containerd[1540]: time="2025-07-07T06:12:26.480541341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-200-33,Uid:ebbf20442b32345c3cbf056fd5ba5d21,Namespace:kube-system,Attempt:0,}" Jul 7 06:12:26.486648 kubelet[2348]: E0707 06:12:26.485958 2348 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:12:26.486819 containerd[1540]: time="2025-07-07T06:12:26.486556681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-200-33,Uid:728627735c322c0bce6edf24f9774b6c,Namespace:kube-system,Attempt:0,}" Jul 7 06:12:26.494283 kubelet[2348]: E0707 06:12:26.494252 2348 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:12:26.495546 containerd[1540]: time="2025-07-07T06:12:26.495492141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-200-33,Uid:67f2eb647cc191db103edb0a591705d3,Namespace:kube-system,Attempt:0,}" Jul 7 06:12:26.515785 containerd[1540]: time="2025-07-07T06:12:26.512986861Z" level=info msg="connecting to shim eb6660bc68a4fd95733eed908965c395725dfed7d109c079bd2d72905ff09e1f" address="unix:///run/containerd/s/3277cab56a6a6a23b23d220ecb730b0ec176ebdf5a9677f3fb8b9e719bab81a0" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:12:26.546031 systemd[1]: Started cri-containerd-eb6660bc68a4fd95733eed908965c395725dfed7d109c079bd2d72905ff09e1f.scope - libcontainer container eb6660bc68a4fd95733eed908965c395725dfed7d109c079bd2d72905ff09e1f. Jul 7 06:12:26.551112 containerd[1540]: time="2025-07-07T06:12:26.551063411Z" level=info msg="connecting to shim ff28daf3a4f721591a3f44a8da5d5215eb1c715a0fa01af7c3fe7ebf3f57ab99" address="unix:///run/containerd/s/13b48f11c368138f4bc77f2b575de7c346013f3950a715cc2c12c30d186827e4" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:12:26.581354 containerd[1540]: time="2025-07-07T06:12:26.579899381Z" level=info msg="connecting to shim 158d1d148f5b41615b985ffed8f7b3ff52837d7d13e5ed8f3220b9c9eefc19eb" address="unix:///run/containerd/s/a6ed09c26071091bee717d204b847f0db2f61630cbf72918287009d7d99537c4" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:12:26.606988 systemd[1]: Started cri-containerd-ff28daf3a4f721591a3f44a8da5d5215eb1c715a0fa01af7c3fe7ebf3f57ab99.scope - libcontainer container ff28daf3a4f721591a3f44a8da5d5215eb1c715a0fa01af7c3fe7ebf3f57ab99. Jul 7 06:12:26.611768 systemd[1]: Started cri-containerd-158d1d148f5b41615b985ffed8f7b3ff52837d7d13e5ed8f3220b9c9eefc19eb.scope - libcontainer container 158d1d148f5b41615b985ffed8f7b3ff52837d7d13e5ed8f3220b9c9eefc19eb. 
Jul 7 06:12:26.646613 containerd[1540]: time="2025-07-07T06:12:26.646557501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-200-33,Uid:ebbf20442b32345c3cbf056fd5ba5d21,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb6660bc68a4fd95733eed908965c395725dfed7d109c079bd2d72905ff09e1f\"" Jul 7 06:12:26.648854 kubelet[2348]: E0707 06:12:26.648661 2348 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.200.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-200-33?timeout=10s\": dial tcp 172.234.200.33:6443: connect: connection refused" interval="800ms" Jul 7 06:12:26.650725 kubelet[2348]: E0707 06:12:26.650676 2348 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:12:26.655958 containerd[1540]: time="2025-07-07T06:12:26.655930621Z" level=info msg="CreateContainer within sandbox \"eb6660bc68a4fd95733eed908965c395725dfed7d109c079bd2d72905ff09e1f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 06:12:26.680800 containerd[1540]: time="2025-07-07T06:12:26.679915571Z" level=info msg="Container c8aa5757276be9501ae0b058f8e0536fc7da06b0b3d7c9832780927b147d298f: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:12:26.688208 containerd[1540]: time="2025-07-07T06:12:26.688159691Z" level=info msg="CreateContainer within sandbox \"eb6660bc68a4fd95733eed908965c395725dfed7d109c079bd2d72905ff09e1f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c8aa5757276be9501ae0b058f8e0536fc7da06b0b3d7c9832780927b147d298f\"" Jul 7 06:12:26.689056 containerd[1540]: time="2025-07-07T06:12:26.689032791Z" level=info msg="StartContainer for \"c8aa5757276be9501ae0b058f8e0536fc7da06b0b3d7c9832780927b147d298f\"" Jul 7 06:12:26.690467 containerd[1540]: time="2025-07-07T06:12:26.690425221Z" level=info 
msg="connecting to shim c8aa5757276be9501ae0b058f8e0536fc7da06b0b3d7c9832780927b147d298f" address="unix:///run/containerd/s/3277cab56a6a6a23b23d220ecb730b0ec176ebdf5a9677f3fb8b9e719bab81a0" protocol=ttrpc version=3 Jul 7 06:12:26.703308 containerd[1540]: time="2025-07-07T06:12:26.703199501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-200-33,Uid:728627735c322c0bce6edf24f9774b6c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff28daf3a4f721591a3f44a8da5d5215eb1c715a0fa01af7c3fe7ebf3f57ab99\"" Jul 7 06:12:26.707845 kubelet[2348]: E0707 06:12:26.706106 2348 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:12:26.707909 containerd[1540]: time="2025-07-07T06:12:26.707278821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-200-33,Uid:67f2eb647cc191db103edb0a591705d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"158d1d148f5b41615b985ffed8f7b3ff52837d7d13e5ed8f3220b9c9eefc19eb\"" Jul 7 06:12:26.709104 kubelet[2348]: E0707 06:12:26.709086 2348 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:12:26.709364 containerd[1540]: time="2025-07-07T06:12:26.709343401Z" level=info msg="CreateContainer within sandbox \"ff28daf3a4f721591a3f44a8da5d5215eb1c715a0fa01af7c3fe7ebf3f57ab99\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 06:12:26.711985 systemd[1]: Started cri-containerd-c8aa5757276be9501ae0b058f8e0536fc7da06b0b3d7c9832780927b147d298f.scope - libcontainer container c8aa5757276be9501ae0b058f8e0536fc7da06b0b3d7c9832780927b147d298f. 
Jul 7 06:12:26.712988 containerd[1540]: time="2025-07-07T06:12:26.711237141Z" level=info msg="CreateContainer within sandbox \"158d1d148f5b41615b985ffed8f7b3ff52837d7d13e5ed8f3220b9c9eefc19eb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 06:12:26.721852 containerd[1540]: time="2025-07-07T06:12:26.721204341Z" level=info msg="Container c462bc7b3b46bb3c59332697271e3374d5515e3bcb1275b900da0a27dc745fec: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:12:26.722524 containerd[1540]: time="2025-07-07T06:12:26.722496611Z" level=info msg="Container 33fea0e83067ff3b1fac5c56c56dc85dcef28c2df7b84f7435c2652f12b4b2e3: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:12:26.727386 containerd[1540]: time="2025-07-07T06:12:26.727351531Z" level=info msg="CreateContainer within sandbox \"ff28daf3a4f721591a3f44a8da5d5215eb1c715a0fa01af7c3fe7ebf3f57ab99\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c462bc7b3b46bb3c59332697271e3374d5515e3bcb1275b900da0a27dc745fec\"" Jul 7 06:12:26.729194 containerd[1540]: time="2025-07-07T06:12:26.728801071Z" level=info msg="CreateContainer within sandbox \"158d1d148f5b41615b985ffed8f7b3ff52837d7d13e5ed8f3220b9c9eefc19eb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"33fea0e83067ff3b1fac5c56c56dc85dcef28c2df7b84f7435c2652f12b4b2e3\"" Jul 7 06:12:26.729235 containerd[1540]: time="2025-07-07T06:12:26.729210731Z" level=info msg="StartContainer for \"c462bc7b3b46bb3c59332697271e3374d5515e3bcb1275b900da0a27dc745fec\"" Jul 7 06:12:26.729500 containerd[1540]: time="2025-07-07T06:12:26.729248601Z" level=info msg="StartContainer for \"33fea0e83067ff3b1fac5c56c56dc85dcef28c2df7b84f7435c2652f12b4b2e3\"" Jul 7 06:12:26.730850 containerd[1540]: time="2025-07-07T06:12:26.730238451Z" level=info msg="connecting to shim 33fea0e83067ff3b1fac5c56c56dc85dcef28c2df7b84f7435c2652f12b4b2e3" 
address="unix:///run/containerd/s/a6ed09c26071091bee717d204b847f0db2f61630cbf72918287009d7d99537c4" protocol=ttrpc version=3 Jul 7 06:12:26.730850 containerd[1540]: time="2025-07-07T06:12:26.730585361Z" level=info msg="connecting to shim c462bc7b3b46bb3c59332697271e3374d5515e3bcb1275b900da0a27dc745fec" address="unix:///run/containerd/s/13b48f11c368138f4bc77f2b575de7c346013f3950a715cc2c12c30d186827e4" protocol=ttrpc version=3 Jul 7 06:12:26.751983 systemd[1]: Started cri-containerd-c462bc7b3b46bb3c59332697271e3374d5515e3bcb1275b900da0a27dc745fec.scope - libcontainer container c462bc7b3b46bb3c59332697271e3374d5515e3bcb1275b900da0a27dc745fec. Jul 7 06:12:26.763081 systemd[1]: Started cri-containerd-33fea0e83067ff3b1fac5c56c56dc85dcef28c2df7b84f7435c2652f12b4b2e3.scope - libcontainer container 33fea0e83067ff3b1fac5c56c56dc85dcef28c2df7b84f7435c2652f12b4b2e3. Jul 7 06:12:26.810982 containerd[1540]: time="2025-07-07T06:12:26.810930431Z" level=info msg="StartContainer for \"c8aa5757276be9501ae0b058f8e0536fc7da06b0b3d7c9832780927b147d298f\" returns successfully" Jul 7 06:12:26.839513 kubelet[2348]: I0707 06:12:26.839136 2348 kubelet_node_status.go:72] "Attempting to register node" node="172-234-200-33" Jul 7 06:12:26.839513 kubelet[2348]: E0707 06:12:26.839479 2348 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.234.200.33:6443/api/v1/nodes\": dial tcp 172.234.200.33:6443: connect: connection refused" node="172-234-200-33" Jul 7 06:12:26.854612 containerd[1540]: time="2025-07-07T06:12:26.854572831Z" level=info msg="StartContainer for \"c462bc7b3b46bb3c59332697271e3374d5515e3bcb1275b900da0a27dc745fec\" returns successfully" Jul 7 06:12:26.870003 containerd[1540]: time="2025-07-07T06:12:26.869963891Z" level=info msg="StartContainer for \"33fea0e83067ff3b1fac5c56c56dc85dcef28c2df7b84f7435c2652f12b4b2e3\" returns successfully" Jul 7 06:12:27.091441 kubelet[2348]: E0707 06:12:27.089162 2348 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:12:27.093760 kubelet[2348]: E0707 06:12:27.093478 2348 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:12:27.095039 kubelet[2348]: E0707 06:12:27.094938 2348 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:12:27.643646 kubelet[2348]: I0707 06:12:27.643610 2348 kubelet_node_status.go:72] "Attempting to register node" node="172-234-200-33" Jul 7 06:12:28.029876 kubelet[2348]: I0707 06:12:28.029839 2348 apiserver.go:52] "Watching apiserver" Jul 7 06:12:28.073790 kubelet[2348]: E0707 06:12:28.073739 2348 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-234-200-33\" not found" node="172-234-200-33" Jul 7 06:12:28.097433 kubelet[2348]: E0707 06:12:28.097400 2348 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:12:28.098521 kubelet[2348]: E0707 06:12:28.098497 2348 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:12:28.109305 kubelet[2348]: I0707 06:12:28.109277 2348 kubelet_node_status.go:75] "Successfully registered node" node="172-234-200-33" Jul 7 06:12:28.109305 kubelet[2348]: E0707 06:12:28.109301 2348 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172-234-200-33\": node \"172-234-200-33\" not found" Jul 7 06:12:28.146513 kubelet[2348]: 
I0707 06:12:28.146471 2348 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 06:12:30.015331 systemd[1]: Reload requested from client PID 2619 ('systemctl') (unit session-7.scope)... Jul 7 06:12:30.015352 systemd[1]: Reloading... Jul 7 06:12:30.159885 zram_generator::config[2678]: No configuration found. Jul 7 06:12:30.232850 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:12:30.351158 systemd[1]: Reloading finished in 335 ms. Jul 7 06:12:30.381791 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:12:30.404754 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 06:12:30.405117 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:12:30.405189 systemd[1]: kubelet.service: Consumed 883ms CPU time, 131.4M memory peak. Jul 7 06:12:30.407265 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:12:30.594463 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:12:30.605548 (kubelet)[2715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 06:12:30.677529 kubelet[2715]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:12:30.677529 kubelet[2715]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jul 7 06:12:30.677529 kubelet[2715]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:12:30.679625 kubelet[2715]: I0707 06:12:30.678378 2715 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 06:12:30.684593 kubelet[2715]: I0707 06:12:30.684575 2715 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 06:12:30.684673 kubelet[2715]: I0707 06:12:30.684663 2715 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 06:12:30.684934 kubelet[2715]: I0707 06:12:30.684921 2715 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 06:12:30.686158 kubelet[2715]: I0707 06:12:30.686133 2715 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 7 06:12:30.689963 kubelet[2715]: I0707 06:12:30.689928 2715 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:12:30.698899 kubelet[2715]: I0707 06:12:30.698878 2715 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 06:12:30.702710 kubelet[2715]: I0707 06:12:30.702683 2715 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 06:12:30.702949 kubelet[2715]: I0707 06:12:30.702906 2715 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 06:12:30.703544 kubelet[2715]: I0707 06:12:30.703154 2715 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 06:12:30.703544 kubelet[2715]: I0707 06:12:30.703184 2715 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-200-33","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolic
yOptions":null,"CgroupVersion":2} Jul 7 06:12:30.703544 kubelet[2715]: I0707 06:12:30.703388 2715 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 06:12:30.703544 kubelet[2715]: I0707 06:12:30.703401 2715 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 06:12:30.703778 kubelet[2715]: I0707 06:12:30.703432 2715 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:12:30.704280 kubelet[2715]: I0707 06:12:30.704268 2715 kubelet.go:408] "Attempting to sync node with API server" Jul 7 06:12:30.704359 kubelet[2715]: I0707 06:12:30.704345 2715 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:12:30.704454 kubelet[2715]: I0707 06:12:30.704442 2715 kubelet.go:314] "Adding apiserver pod source" Jul 7 06:12:30.704533 kubelet[2715]: I0707 06:12:30.704520 2715 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:12:30.706970 kubelet[2715]: I0707 06:12:30.706956 2715 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 06:12:30.707480 kubelet[2715]: I0707 06:12:30.707462 2715 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 06:12:30.708025 kubelet[2715]: I0707 06:12:30.708011 2715 server.go:1274] "Started kubelet" Jul 7 06:12:30.712299 kubelet[2715]: I0707 06:12:30.712285 2715 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:12:30.720917 kubelet[2715]: I0707 06:12:30.720896 2715 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:12:30.721625 kubelet[2715]: I0707 06:12:30.721611 2715 server.go:449] "Adding debug handlers to kubelet server" Jul 7 06:12:30.723713 kubelet[2715]: I0707 06:12:30.723691 2715 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:12:30.724434 kubelet[2715]: I0707 06:12:30.724421 2715 server.go:236] "Starting to 
serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:12:30.724622 kubelet[2715]: I0707 06:12:30.724609 2715 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 06:12:30.726407 kubelet[2715]: I0707 06:12:30.726394 2715 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 06:12:30.727999 kubelet[2715]: E0707 06:12:30.727982 2715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-234-200-33\" not found" Jul 7 06:12:30.732019 kubelet[2715]: I0707 06:12:30.732004 2715 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 06:12:30.732176 kubelet[2715]: I0707 06:12:30.732165 2715 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:12:30.733205 kubelet[2715]: E0707 06:12:30.733189 2715 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 06:12:30.733380 kubelet[2715]: I0707 06:12:30.733367 2715 factory.go:221] Registration of the systemd container factory successfully Jul 7 06:12:30.733914 kubelet[2715]: I0707 06:12:30.733896 2715 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:12:30.735912 kubelet[2715]: I0707 06:12:30.735898 2715 factory.go:221] Registration of the containerd container factory successfully Jul 7 06:12:30.737189 kubelet[2715]: I0707 06:12:30.737083 2715 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 06:12:30.742556 kubelet[2715]: I0707 06:12:30.742538 2715 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 7 06:12:30.745398 kubelet[2715]: I0707 06:12:30.744865 2715 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 06:12:30.745398 kubelet[2715]: I0707 06:12:30.744883 2715 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 06:12:30.745398 kubelet[2715]: E0707 06:12:30.744924 2715 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:12:30.789438 kubelet[2715]: I0707 06:12:30.789419 2715 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 06:12:30.789573 kubelet[2715]: I0707 06:12:30.789561 2715 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 06:12:30.789662 kubelet[2715]: I0707 06:12:30.789653 2715 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:12:30.789882 kubelet[2715]: I0707 06:12:30.789868 2715 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 06:12:30.790003 kubelet[2715]: I0707 06:12:30.789982 2715 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 06:12:30.790072 kubelet[2715]: I0707 06:12:30.790063 2715 policy_none.go:49] "None policy: Start" Jul 7 06:12:30.790896 kubelet[2715]: I0707 06:12:30.790816 2715 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 06:12:30.791921 kubelet[2715]: I0707 06:12:30.791219 2715 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:12:30.791921 kubelet[2715]: I0707 06:12:30.791382 2715 state_mem.go:75] "Updated machine memory state" Jul 7 06:12:30.797409 kubelet[2715]: I0707 06:12:30.797377 2715 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 06:12:30.797567 kubelet[2715]: I0707 06:12:30.797539 2715 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 06:12:30.797601 kubelet[2715]: I0707 06:12:30.797557 2715 container_log_manager.go:189] "Initializing container 
log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:12:30.798350 kubelet[2715]: I0707 06:12:30.798284 2715 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 06:12:30.903728 kubelet[2715]: I0707 06:12:30.903664 2715 kubelet_node_status.go:72] "Attempting to register node" node="172-234-200-33" Jul 7 06:12:30.912260 kubelet[2715]: I0707 06:12:30.912210 2715 kubelet_node_status.go:111] "Node was previously registered" node="172-234-200-33" Jul 7 06:12:30.912422 kubelet[2715]: I0707 06:12:30.912337 2715 kubelet_node_status.go:75] "Successfully registered node" node="172-234-200-33" Jul 7 06:12:31.033724 kubelet[2715]: I0707 06:12:31.033517 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebbf20442b32345c3cbf056fd5ba5d21-k8s-certs\") pod \"kube-apiserver-172-234-200-33\" (UID: \"ebbf20442b32345c3cbf056fd5ba5d21\") " pod="kube-system/kube-apiserver-172-234-200-33" Jul 7 06:12:31.033724 kubelet[2715]: I0707 06:12:31.033555 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebbf20442b32345c3cbf056fd5ba5d21-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-200-33\" (UID: \"ebbf20442b32345c3cbf056fd5ba5d21\") " pod="kube-system/kube-apiserver-172-234-200-33" Jul 7 06:12:31.033724 kubelet[2715]: I0707 06:12:31.033575 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/728627735c322c0bce6edf24f9774b6c-k8s-certs\") pod \"kube-controller-manager-172-234-200-33\" (UID: \"728627735c322c0bce6edf24f9774b6c\") " pod="kube-system/kube-controller-manager-172-234-200-33" Jul 7 06:12:31.033724 kubelet[2715]: I0707 06:12:31.033590 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/67f2eb647cc191db103edb0a591705d3-kubeconfig\") pod \"kube-scheduler-172-234-200-33\" (UID: \"67f2eb647cc191db103edb0a591705d3\") " pod="kube-system/kube-scheduler-172-234-200-33" Jul 7 06:12:31.033724 kubelet[2715]: I0707 06:12:31.033605 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebbf20442b32345c3cbf056fd5ba5d21-ca-certs\") pod \"kube-apiserver-172-234-200-33\" (UID: \"ebbf20442b32345c3cbf056fd5ba5d21\") " pod="kube-system/kube-apiserver-172-234-200-33" Jul 7 06:12:31.034013 kubelet[2715]: I0707 06:12:31.033618 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/728627735c322c0bce6edf24f9774b6c-ca-certs\") pod \"kube-controller-manager-172-234-200-33\" (UID: \"728627735c322c0bce6edf24f9774b6c\") " pod="kube-system/kube-controller-manager-172-234-200-33" Jul 7 06:12:31.034013 kubelet[2715]: I0707 06:12:31.033632 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/728627735c322c0bce6edf24f9774b6c-flexvolume-dir\") pod \"kube-controller-manager-172-234-200-33\" (UID: \"728627735c322c0bce6edf24f9774b6c\") " pod="kube-system/kube-controller-manager-172-234-200-33" Jul 7 06:12:31.034013 kubelet[2715]: I0707 06:12:31.033645 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/728627735c322c0bce6edf24f9774b6c-kubeconfig\") pod \"kube-controller-manager-172-234-200-33\" (UID: \"728627735c322c0bce6edf24f9774b6c\") " pod="kube-system/kube-controller-manager-172-234-200-33" Jul 7 06:12:31.034013 kubelet[2715]: I0707 06:12:31.033661 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/728627735c322c0bce6edf24f9774b6c-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-200-33\" (UID: \"728627735c322c0bce6edf24f9774b6c\") " pod="kube-system/kube-controller-manager-172-234-200-33" Jul 7 06:12:31.154494 kubelet[2715]: E0707 06:12:31.154236 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:12:31.156928 kubelet[2715]: E0707 06:12:31.156813 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:12:31.156928 kubelet[2715]: E0707 06:12:31.156877 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:12:31.711436 kubelet[2715]: I0707 06:12:31.711401 2715 apiserver.go:52] "Watching apiserver" Jul 7 06:12:31.733083 kubelet[2715]: I0707 06:12:31.733057 2715 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 06:12:31.777419 kubelet[2715]: E0707 06:12:31.777391 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:12:31.777710 kubelet[2715]: E0707 06:12:31.777687 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:12:31.782366 kubelet[2715]: E0707 06:12:31.782308 2715 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-172-234-200-33\" already exists" 
pod="kube-system/kube-apiserver-172-234-200-33" Jul 7 06:12:31.782411 kubelet[2715]: E0707 06:12:31.782403 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:12:31.794975 kubelet[2715]: I0707 06:12:31.794925 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-234-200-33" podStartSLOduration=1.794900831 podStartE2EDuration="1.794900831s" podCreationTimestamp="2025-07-07 06:12:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:12:31.794248821 +0000 UTC m=+1.182131831" watchObservedRunningTime="2025-07-07 06:12:31.794900831 +0000 UTC m=+1.182783831" Jul 7 06:12:31.799264 kubelet[2715]: I0707 06:12:31.799219 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-234-200-33" podStartSLOduration=1.799209321 podStartE2EDuration="1.799209321s" podCreationTimestamp="2025-07-07 06:12:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:12:31.799081251 +0000 UTC m=+1.186964251" watchObservedRunningTime="2025-07-07 06:12:31.799209321 +0000 UTC m=+1.187092331" Jul 7 06:12:31.804002 kubelet[2715]: I0707 06:12:31.803949 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-234-200-33" podStartSLOduration=1.803940351 podStartE2EDuration="1.803940351s" podCreationTimestamp="2025-07-07 06:12:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:12:31.803664181 +0000 UTC m=+1.191547181" watchObservedRunningTime="2025-07-07 06:12:31.803940351 +0000 UTC m=+1.191823351" Jul 
7 06:12:32.778387 kubelet[2715]: E0707 06:12:32.778355 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:12:33.220896 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 7 06:12:33.779795 kubelet[2715]: E0707 06:12:33.779753 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:12:34.781559 kubelet[2715]: E0707 06:12:34.781518 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:12:35.161210 kubelet[2715]: I0707 06:12:35.161162 2715 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 06:12:35.161688 containerd[1540]: time="2025-07-07T06:12:35.161515271Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 7 06:12:35.162260 kubelet[2715]: I0707 06:12:35.162188 2715 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 06:12:35.760183 systemd[1]: Created slice kubepods-besteffort-pod05c7eeed_8dc9_47a6_9e2e_76f8e2352277.slice - libcontainer container kubepods-besteffort-pod05c7eeed_8dc9_47a6_9e2e_76f8e2352277.slice. 
Jul 7 06:12:35.765253 kubelet[2715]: I0707 06:12:35.765229 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05c7eeed-8dc9-47a6-9e2e-76f8e2352277-xtables-lock\") pod \"kube-proxy-cfsxw\" (UID: \"05c7eeed-8dc9-47a6-9e2e-76f8e2352277\") " pod="kube-system/kube-proxy-cfsxw"
Jul 7 06:12:35.765363 kubelet[2715]: I0707 06:12:35.765258 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05c7eeed-8dc9-47a6-9e2e-76f8e2352277-lib-modules\") pod \"kube-proxy-cfsxw\" (UID: \"05c7eeed-8dc9-47a6-9e2e-76f8e2352277\") " pod="kube-system/kube-proxy-cfsxw"
Jul 7 06:12:35.765363 kubelet[2715]: I0707 06:12:35.765284 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/05c7eeed-8dc9-47a6-9e2e-76f8e2352277-kube-proxy\") pod \"kube-proxy-cfsxw\" (UID: \"05c7eeed-8dc9-47a6-9e2e-76f8e2352277\") " pod="kube-system/kube-proxy-cfsxw"
Jul 7 06:12:35.765363 kubelet[2715]: I0707 06:12:35.765298 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b57jw\" (UniqueName: \"kubernetes.io/projected/05c7eeed-8dc9-47a6-9e2e-76f8e2352277-kube-api-access-b57jw\") pod \"kube-proxy-cfsxw\" (UID: \"05c7eeed-8dc9-47a6-9e2e-76f8e2352277\") " pod="kube-system/kube-proxy-cfsxw"
Jul 7 06:12:35.829420 kubelet[2715]: E0707 06:12:35.829393 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Jul 7 06:12:35.869977 kubelet[2715]: E0707 06:12:35.869940 2715 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jul 7 06:12:35.869977 kubelet[2715]: E0707 06:12:35.869973 2715 projected.go:194] Error preparing data for projected volume kube-api-access-b57jw for pod kube-system/kube-proxy-cfsxw: configmap "kube-root-ca.crt" not found
Jul 7 06:12:35.870247 kubelet[2715]: E0707 06:12:35.870027 2715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/05c7eeed-8dc9-47a6-9e2e-76f8e2352277-kube-api-access-b57jw podName:05c7eeed-8dc9-47a6-9e2e-76f8e2352277 nodeName:}" failed. No retries permitted until 2025-07-07 06:12:36.370008021 +0000 UTC m=+5.757891031 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-b57jw" (UniqueName: "kubernetes.io/projected/05c7eeed-8dc9-47a6-9e2e-76f8e2352277-kube-api-access-b57jw") pod "kube-proxy-cfsxw" (UID: "05c7eeed-8dc9-47a6-9e2e-76f8e2352277") : configmap "kube-root-ca.crt" not found
Jul 7 06:12:36.283686 systemd[1]: Created slice kubepods-besteffort-poda0d10c49_05c9_4c77_962d_7c5f7761a1b9.slice - libcontainer container kubepods-besteffort-poda0d10c49_05c9_4c77_962d_7c5f7761a1b9.slice.
Jul 7 06:12:36.369822 kubelet[2715]: I0707 06:12:36.369770 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a0d10c49-05c9-4c77-962d-7c5f7761a1b9-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-cbqkz\" (UID: \"a0d10c49-05c9-4c77-962d-7c5f7761a1b9\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-cbqkz"
Jul 7 06:12:36.369822 kubelet[2715]: I0707 06:12:36.369807 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dmp8\" (UniqueName: \"kubernetes.io/projected/a0d10c49-05c9-4c77-962d-7c5f7761a1b9-kube-api-access-8dmp8\") pod \"tigera-operator-5bf8dfcb4-cbqkz\" (UID: \"a0d10c49-05c9-4c77-962d-7c5f7761a1b9\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-cbqkz"
Jul 7 06:12:36.588976 containerd[1540]: time="2025-07-07T06:12:36.588552471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-cbqkz,Uid:a0d10c49-05c9-4c77-962d-7c5f7761a1b9,Namespace:tigera-operator,Attempt:0,}"
Jul 7 06:12:36.604551 containerd[1540]: time="2025-07-07T06:12:36.604482111Z" level=info msg="connecting to shim 0cf1bbd6b1ae898526fc0da3f62a923fe2396a33df9c9c38810538bc6bc8e2c7" address="unix:///run/containerd/s/15a9e20b18673520d4d841dd0726065792ded3021412f5a9995a593ad0f3ff30" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:12:36.632965 systemd[1]: Started cri-containerd-0cf1bbd6b1ae898526fc0da3f62a923fe2396a33df9c9c38810538bc6bc8e2c7.scope - libcontainer container 0cf1bbd6b1ae898526fc0da3f62a923fe2396a33df9c9c38810538bc6bc8e2c7.
Jul 7 06:12:36.669152 kubelet[2715]: E0707 06:12:36.669072 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Jul 7 06:12:36.670648 containerd[1540]: time="2025-07-07T06:12:36.670584651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cfsxw,Uid:05c7eeed-8dc9-47a6-9e2e-76f8e2352277,Namespace:kube-system,Attempt:0,}"
Jul 7 06:12:36.702852 containerd[1540]: time="2025-07-07T06:12:36.702762861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-cbqkz,Uid:a0d10c49-05c9-4c77-962d-7c5f7761a1b9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0cf1bbd6b1ae898526fc0da3f62a923fe2396a33df9c9c38810538bc6bc8e2c7\""
Jul 7 06:12:36.705130 containerd[1540]: time="2025-07-07T06:12:36.704734081Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 7 06:12:36.712433 containerd[1540]: time="2025-07-07T06:12:36.712378851Z" level=info msg="connecting to shim c7fb11783e9e44783b2b9f1e8a6e4b840c57b9b3f3d00f60ce2cfac15404f18c" address="unix:///run/containerd/s/584ffaaabb14b36a1989144d10a87d2df3d68d127d97cd39f749a7389dfc2222" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:12:36.738976 systemd[1]: Started cri-containerd-c7fb11783e9e44783b2b9f1e8a6e4b840c57b9b3f3d00f60ce2cfac15404f18c.scope - libcontainer container c7fb11783e9e44783b2b9f1e8a6e4b840c57b9b3f3d00f60ce2cfac15404f18c.
Jul 7 06:12:36.767751 containerd[1540]: time="2025-07-07T06:12:36.767696761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cfsxw,Uid:05c7eeed-8dc9-47a6-9e2e-76f8e2352277,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7fb11783e9e44783b2b9f1e8a6e4b840c57b9b3f3d00f60ce2cfac15404f18c\""
Jul 7 06:12:36.768643 kubelet[2715]: E0707 06:12:36.768623 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Jul 7 06:12:36.771940 containerd[1540]: time="2025-07-07T06:12:36.771890731Z" level=info msg="CreateContainer within sandbox \"c7fb11783e9e44783b2b9f1e8a6e4b840c57b9b3f3d00f60ce2cfac15404f18c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 7 06:12:36.782475 containerd[1540]: time="2025-07-07T06:12:36.782434831Z" level=info msg="Container d53e355040d37037397f736457f75e4e3785b3f8ef8fb236b7c7df9f5d83aa3c: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:12:36.791721 containerd[1540]: time="2025-07-07T06:12:36.791655961Z" level=info msg="CreateContainer within sandbox \"c7fb11783e9e44783b2b9f1e8a6e4b840c57b9b3f3d00f60ce2cfac15404f18c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d53e355040d37037397f736457f75e4e3785b3f8ef8fb236b7c7df9f5d83aa3c\""
Jul 7 06:12:36.792591 containerd[1540]: time="2025-07-07T06:12:36.792570541Z" level=info msg="StartContainer for \"d53e355040d37037397f736457f75e4e3785b3f8ef8fb236b7c7df9f5d83aa3c\""
Jul 7 06:12:36.795919 containerd[1540]: time="2025-07-07T06:12:36.795681551Z" level=info msg="connecting to shim d53e355040d37037397f736457f75e4e3785b3f8ef8fb236b7c7df9f5d83aa3c" address="unix:///run/containerd/s/584ffaaabb14b36a1989144d10a87d2df3d68d127d97cd39f749a7389dfc2222" protocol=ttrpc version=3
Jul 7 06:12:36.818965 systemd[1]: Started cri-containerd-d53e355040d37037397f736457f75e4e3785b3f8ef8fb236b7c7df9f5d83aa3c.scope - libcontainer container d53e355040d37037397f736457f75e4e3785b3f8ef8fb236b7c7df9f5d83aa3c.
Jul 7 06:12:36.863678 containerd[1540]: time="2025-07-07T06:12:36.863536281Z" level=info msg="StartContainer for \"d53e355040d37037397f736457f75e4e3785b3f8ef8fb236b7c7df9f5d83aa3c\" returns successfully"
Jul 7 06:12:37.794448 kubelet[2715]: E0707 06:12:37.794087 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Jul 7 06:12:37.806473 kubelet[2715]: I0707 06:12:37.806087 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cfsxw" podStartSLOduration=2.806069241 podStartE2EDuration="2.806069241s" podCreationTimestamp="2025-07-07 06:12:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:12:37.805937111 +0000 UTC m=+7.193820111" watchObservedRunningTime="2025-07-07 06:12:37.806069241 +0000 UTC m=+7.193952251"
Jul 7 06:12:37.997504 containerd[1540]: time="2025-07-07T06:12:37.997441811Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:12:37.998323 containerd[1540]: time="2025-07-07T06:12:37.998148911Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543"
Jul 7 06:12:37.998862 containerd[1540]: time="2025-07-07T06:12:37.998817491Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:12:38.000414 containerd[1540]: time="2025-07-07T06:12:38.000351431Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:12:38.000973 containerd[1540]: time="2025-07-07T06:12:38.000934441Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 1.2961659s"
Jul 7 06:12:38.001010 containerd[1540]: time="2025-07-07T06:12:38.000973951Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\""
Jul 7 06:12:38.003559 containerd[1540]: time="2025-07-07T06:12:38.003515071Z" level=info msg="CreateContainer within sandbox \"0cf1bbd6b1ae898526fc0da3f62a923fe2396a33df9c9c38810538bc6bc8e2c7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 7 06:12:38.011407 containerd[1540]: time="2025-07-07T06:12:38.011363001Z" level=info msg="Container 4ea0c4a0e383585cb9d9a7c5b81a10c46a327863ad04ac23c43e2c303d6762b3: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:12:38.016398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2965423048.mount: Deactivated successfully.
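The pod_startup_latency_tracker entry above reports podStartE2EDuration="2.806069241s" for kube-proxy-cfsxw, which is simply the observed running time minus podCreationTimestamp. A small Python sketch of that arithmetic, truncating Go's nanosecond timestamps to Python's microsecond precision (the parsing helper is illustrative, not kubelet code):

```python
from datetime import datetime, timezone

def parse_k8s_time(ts: str) -> datetime:
    """Parse timestamps like '2025-07-07 06:12:37.806069241 +0000 UTC'.

    Go prints nanosecond precision; datetime carries only microseconds,
    so the fractional part is truncated to six digits.
    """
    head = ts.split(" +")[0]  # drop the '+0000 UTC' suffix
    if "." in head:
        date_part, frac = head.split(".")
        head = f"{date_part}.{frac[:6]}"
        fmt = "%Y-%m-%d %H:%M:%S.%f"
    else:
        fmt = "%Y-%m-%d %H:%M:%S"
    return datetime.strptime(head, fmt).replace(tzinfo=timezone.utc)

created = parse_k8s_time("2025-07-07 06:12:35 +0000 UTC")
running = parse_k8s_time("2025-07-07 06:12:37.806069241 +0000 UTC")
print(round((running - created).total_seconds(), 6))  # 2.806069
```

For kube-proxy the pull timestamps are the zero value ("0001-01-01 ..."), so no pull time is subtracted; for the tigera-operator pod later in the log, podStartSLOduration is smaller than the E2E duration because the image pull interval is excluded.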
Jul 7 06:12:38.018677 containerd[1540]: time="2025-07-07T06:12:38.018642021Z" level=info msg="CreateContainer within sandbox \"0cf1bbd6b1ae898526fc0da3f62a923fe2396a33df9c9c38810538bc6bc8e2c7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4ea0c4a0e383585cb9d9a7c5b81a10c46a327863ad04ac23c43e2c303d6762b3\""
Jul 7 06:12:38.019307 containerd[1540]: time="2025-07-07T06:12:38.019266131Z" level=info msg="StartContainer for \"4ea0c4a0e383585cb9d9a7c5b81a10c46a327863ad04ac23c43e2c303d6762b3\""
Jul 7 06:12:38.020540 containerd[1540]: time="2025-07-07T06:12:38.020506711Z" level=info msg="connecting to shim 4ea0c4a0e383585cb9d9a7c5b81a10c46a327863ad04ac23c43e2c303d6762b3" address="unix:///run/containerd/s/15a9e20b18673520d4d841dd0726065792ded3021412f5a9995a593ad0f3ff30" protocol=ttrpc version=3
Jul 7 06:12:38.049954 systemd[1]: Started cri-containerd-4ea0c4a0e383585cb9d9a7c5b81a10c46a327863ad04ac23c43e2c303d6762b3.scope - libcontainer container 4ea0c4a0e383585cb9d9a7c5b81a10c46a327863ad04ac23c43e2c303d6762b3.
Jul 7 06:12:38.087866 containerd[1540]: time="2025-07-07T06:12:38.087549231Z" level=info msg="StartContainer for \"4ea0c4a0e383585cb9d9a7c5b81a10c46a327863ad04ac23c43e2c303d6762b3\" returns successfully"
Jul 7 06:12:38.416520 kubelet[2715]: E0707 06:12:38.416475 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Jul 7 06:12:38.797978 kubelet[2715]: E0707 06:12:38.797752 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Jul 7 06:12:38.799295 kubelet[2715]: E0707 06:12:38.799240 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Jul 7 06:12:38.814578 kubelet[2715]: I0707 06:12:38.814519 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-cbqkz" podStartSLOduration=1.516515351 podStartE2EDuration="2.814504581s" podCreationTimestamp="2025-07-07 06:12:36 +0000 UTC" firstStartedPulling="2025-07-07 06:12:36.704055111 +0000 UTC m=+6.091938111" lastFinishedPulling="2025-07-07 06:12:38.002044341 +0000 UTC m=+7.389927341" observedRunningTime="2025-07-07 06:12:38.807209861 +0000 UTC m=+8.195092861" watchObservedRunningTime="2025-07-07 06:12:38.814504581 +0000 UTC m=+8.202387581"
Jul 7 06:12:43.653041 sudo[1803]: pam_unix(sudo:session): session closed for user root
Jul 7 06:12:43.692907 kubelet[2715]: E0707 06:12:43.692862 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Jul 7 06:12:43.707940 sshd[1802]: Connection closed by 147.75.109.163 port 59210
Jul 7 06:12:43.710815 sshd-session[1800]: pam_unix(sshd:session): session closed for user core
Jul 7 06:12:43.719568 systemd[1]: sshd@6-172.234.200.33:22-147.75.109.163:59210.service: Deactivated successfully.
Jul 7 06:12:43.720289 systemd-logind[1514]: Session 7 logged out. Waiting for processes to exit.
Jul 7 06:12:43.726818 systemd[1]: session-7.scope: Deactivated successfully.
Jul 7 06:12:43.728210 systemd[1]: session-7.scope: Consumed 3.962s CPU time, 224.7M memory peak.
Jul 7 06:12:43.733543 systemd-logind[1514]: Removed session 7.
Jul 7 06:12:43.811307 kubelet[2715]: E0707 06:12:43.811257 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Jul 7 06:12:45.836091 kubelet[2715]: E0707 06:12:45.836036 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Jul 7 06:12:47.424934 update_engine[1517]: I20250707 06:12:47.424876 1517 update_attempter.cc:509] Updating boot flags...
Jul 7 06:12:47.627742 systemd[1]: Created slice kubepods-besteffort-podc83b5c87_363c_45ca_a802_94370097ded8.slice - libcontainer container kubepods-besteffort-podc83b5c87_363c_45ca_a802_94370097ded8.slice.
Jul 7 06:12:47.743411 kubelet[2715]: I0707 06:12:47.743211 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c83b5c87-363c-45ca-a802-94370097ded8-typha-certs\") pod \"calico-typha-68dfc65c8-bs8wl\" (UID: \"c83b5c87-363c-45ca-a802-94370097ded8\") " pod="calico-system/calico-typha-68dfc65c8-bs8wl"
Jul 7 06:12:47.743411 kubelet[2715]: I0707 06:12:47.743249 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq764\" (UniqueName: \"kubernetes.io/projected/c83b5c87-363c-45ca-a802-94370097ded8-kube-api-access-lq764\") pod \"calico-typha-68dfc65c8-bs8wl\" (UID: \"c83b5c87-363c-45ca-a802-94370097ded8\") " pod="calico-system/calico-typha-68dfc65c8-bs8wl"
Jul 7 06:12:47.743411 kubelet[2715]: I0707 06:12:47.743268 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c83b5c87-363c-45ca-a802-94370097ded8-tigera-ca-bundle\") pod \"calico-typha-68dfc65c8-bs8wl\" (UID: \"c83b5c87-363c-45ca-a802-94370097ded8\") " pod="calico-system/calico-typha-68dfc65c8-bs8wl"
Jul 7 06:12:47.933178 kubelet[2715]: E0707 06:12:47.933110 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Jul 7 06:12:47.935244 containerd[1540]: time="2025-07-07T06:12:47.935194666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68dfc65c8-bs8wl,Uid:c83b5c87-363c-45ca-a802-94370097ded8,Namespace:calico-system,Attempt:0,}"
Jul 7 06:12:47.965918 systemd[1]: Created slice kubepods-besteffort-pod6eac6bf9_515b_464d_9b62_a1f95e4bd988.slice - libcontainer container kubepods-besteffort-pod6eac6bf9_515b_464d_9b62_a1f95e4bd988.slice.
Jul 7 06:12:47.974018 containerd[1540]: time="2025-07-07T06:12:47.973980506Z" level=info msg="connecting to shim 28a01f34469f680aad855a62fad24429c4a97af840dc12800f0f96e28b7a2621" address="unix:///run/containerd/s/8070f1d547acb51b5de531a42c56055bfd059f13af17fc14824a782c9123c2b2" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:12:48.011974 systemd[1]: Started cri-containerd-28a01f34469f680aad855a62fad24429c4a97af840dc12800f0f96e28b7a2621.scope - libcontainer container 28a01f34469f680aad855a62fad24429c4a97af840dc12800f0f96e28b7a2621.
Jul 7 06:12:48.047499 kubelet[2715]: I0707 06:12:48.047199 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6eac6bf9-515b-464d-9b62-a1f95e4bd988-cni-bin-dir\") pod \"calico-node-5z5ft\" (UID: \"6eac6bf9-515b-464d-9b62-a1f95e4bd988\") " pod="calico-system/calico-node-5z5ft"
Jul 7 06:12:48.047499 kubelet[2715]: I0707 06:12:48.047235 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6eac6bf9-515b-464d-9b62-a1f95e4bd988-node-certs\") pod \"calico-node-5z5ft\" (UID: \"6eac6bf9-515b-464d-9b62-a1f95e4bd988\") " pod="calico-system/calico-node-5z5ft"
Jul 7 06:12:48.047499 kubelet[2715]: I0707 06:12:48.047253 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6eac6bf9-515b-464d-9b62-a1f95e4bd988-policysync\") pod \"calico-node-5z5ft\" (UID: \"6eac6bf9-515b-464d-9b62-a1f95e4bd988\") " pod="calico-system/calico-node-5z5ft"
Jul 7 06:12:48.047499 kubelet[2715]: I0707 06:12:48.047281 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6eac6bf9-515b-464d-9b62-a1f95e4bd988-cni-log-dir\") pod \"calico-node-5z5ft\" (UID: \"6eac6bf9-515b-464d-9b62-a1f95e4bd988\") " pod="calico-system/calico-node-5z5ft"
Jul 7 06:12:48.047499 kubelet[2715]: I0707 06:12:48.047295 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6eac6bf9-515b-464d-9b62-a1f95e4bd988-xtables-lock\") pod \"calico-node-5z5ft\" (UID: \"6eac6bf9-515b-464d-9b62-a1f95e4bd988\") " pod="calico-system/calico-node-5z5ft"
Jul 7 06:12:48.047713 kubelet[2715]: I0707 06:12:48.047312 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6eac6bf9-515b-464d-9b62-a1f95e4bd988-cni-net-dir\") pod \"calico-node-5z5ft\" (UID: \"6eac6bf9-515b-464d-9b62-a1f95e4bd988\") " pod="calico-system/calico-node-5z5ft"
Jul 7 06:12:48.047713 kubelet[2715]: I0707 06:12:48.047325 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6eac6bf9-515b-464d-9b62-a1f95e4bd988-var-lib-calico\") pod \"calico-node-5z5ft\" (UID: \"6eac6bf9-515b-464d-9b62-a1f95e4bd988\") " pod="calico-system/calico-node-5z5ft"
Jul 7 06:12:48.047713 kubelet[2715]: I0707 06:12:48.047341 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6eac6bf9-515b-464d-9b62-a1f95e4bd988-tigera-ca-bundle\") pod \"calico-node-5z5ft\" (UID: \"6eac6bf9-515b-464d-9b62-a1f95e4bd988\") " pod="calico-system/calico-node-5z5ft"
Jul 7 06:12:48.047713 kubelet[2715]: I0707 06:12:48.047355 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6eac6bf9-515b-464d-9b62-a1f95e4bd988-var-run-calico\") pod \"calico-node-5z5ft\" (UID: \"6eac6bf9-515b-464d-9b62-a1f95e4bd988\") " pod="calico-system/calico-node-5z5ft"
Jul 7 06:12:48.047713 kubelet[2715]: I0707 06:12:48.047376 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn5fw\" (UniqueName: \"kubernetes.io/projected/6eac6bf9-515b-464d-9b62-a1f95e4bd988-kube-api-access-cn5fw\") pod \"calico-node-5z5ft\" (UID: \"6eac6bf9-515b-464d-9b62-a1f95e4bd988\") " pod="calico-system/calico-node-5z5ft"
Jul 7 06:12:48.049008 kubelet[2715]: I0707 06:12:48.047395 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6eac6bf9-515b-464d-9b62-a1f95e4bd988-flexvol-driver-host\") pod \"calico-node-5z5ft\" (UID: \"6eac6bf9-515b-464d-9b62-a1f95e4bd988\") " pod="calico-system/calico-node-5z5ft"
Jul 7 06:12:48.049008 kubelet[2715]: I0707 06:12:48.047410 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6eac6bf9-515b-464d-9b62-a1f95e4bd988-lib-modules\") pod \"calico-node-5z5ft\" (UID: \"6eac6bf9-515b-464d-9b62-a1f95e4bd988\") " pod="calico-system/calico-node-5z5ft"
Jul 7 06:12:48.135563 containerd[1540]: time="2025-07-07T06:12:48.135295307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68dfc65c8-bs8wl,Uid:c83b5c87-363c-45ca-a802-94370097ded8,Namespace:calico-system,Attempt:0,} returns sandbox id \"28a01f34469f680aad855a62fad24429c4a97af840dc12800f0f96e28b7a2621\""
Jul 7 06:12:48.136880 kubelet[2715]: E0707 06:12:48.136435 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Jul 7 06:12:48.137682 containerd[1540]: time="2025-07-07T06:12:48.137663973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 7 06:12:48.153772 kubelet[2715]: E0707
06:12:48.153736 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:12:48.153772 kubelet[2715]: W0707 06:12:48.153766 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:12:48.153898 kubelet[2715]: E0707 06:12:48.153804 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:12:48.158242 kubelet[2715]: E0707 06:12:48.158227 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:12:48.158686 kubelet[2715]: W0707 06:12:48.158587 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:12:48.160066 kubelet[2715]: E0707 06:12:48.160053 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:12:48.160239 kubelet[2715]: W0707 06:12:48.160226 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:12:48.160519 kubelet[2715]: E0707 06:12:48.160505 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:12:48.160598 kubelet[2715]: E0707 06:12:48.160588 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:12:48.161071 kubelet[2715]: E0707 06:12:48.160783 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:12:48.161071 kubelet[2715]: W0707 06:12:48.161045 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:12:48.161348 kubelet[2715]: E0707 06:12:48.161328 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:12:48.161861 kubelet[2715]: E0707 06:12:48.161749 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:12:48.161942 kubelet[2715]: W0707 06:12:48.161929 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:12:48.162248 kubelet[2715]: E0707 06:12:48.162226 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:12:48.163704 kubelet[2715]: E0707 06:12:48.163684 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:12:48.163907 kubelet[2715]: W0707 06:12:48.163739 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:12:48.163907 kubelet[2715]: E0707 06:12:48.163772 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:12:48.164133 kubelet[2715]: E0707 06:12:48.164004 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:12:48.164133 kubelet[2715]: W0707 06:12:48.164030 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:12:48.164133 kubelet[2715]: E0707 06:12:48.164068 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:12:48.164596 kubelet[2715]: E0707 06:12:48.164584 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:12:48.164760 kubelet[2715]: W0707 06:12:48.164749 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:12:48.164914 kubelet[2715]: E0707 06:12:48.164875 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:12:48.165474 kubelet[2715]: E0707 06:12:48.165443 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:12:48.165474 kubelet[2715]: W0707 06:12:48.165455 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:12:48.165691 kubelet[2715]: E0707 06:12:48.165671 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:12:48.165955 kubelet[2715]: E0707 06:12:48.165935 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:12:48.166126 kubelet[2715]: W0707 06:12:48.166114 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:12:48.166214 kubelet[2715]: E0707 06:12:48.166195 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:12:48.166751 kubelet[2715]: E0707 06:12:48.166722 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:12:48.167589 kubelet[2715]: W0707 06:12:48.166734 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:12:48.167665 kubelet[2715]: E0707 06:12:48.167653 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:12:48.168113 kubelet[2715]: E0707 06:12:48.168102 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:12:48.168207 kubelet[2715]: W0707 06:12:48.168173 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:12:48.168346 kubelet[2715]: E0707 06:12:48.168335 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:12:48.168649 kubelet[2715]: E0707 06:12:48.168638 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:12:48.168734 kubelet[2715]: W0707 06:12:48.168723 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:12:48.168810 kubelet[2715]: E0707 06:12:48.168799 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:12:48.169416 kubelet[2715]: E0707 06:12:48.169368 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:12:48.169416 kubelet[2715]: W0707 06:12:48.169391 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:12:48.169525 kubelet[2715]: E0707 06:12:48.169494 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:12:48.169888 kubelet[2715]: E0707 06:12:48.169864 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:12:48.169888 kubelet[2715]: W0707 06:12:48.169876 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:12:48.170039 kubelet[2715]: E0707 06:12:48.170015 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:12:48.170282 kubelet[2715]: E0707 06:12:48.170251 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:12:48.170282 kubelet[2715]: W0707 06:12:48.170261 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:12:48.170798 kubelet[2715]: E0707 06:12:48.170787 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:12:48.171062 kubelet[2715]: E0707 06:12:48.171040 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:12:48.171062 kubelet[2715]: W0707 06:12:48.171049 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:12:48.171266 kubelet[2715]: E0707 06:12:48.171227 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:12:48.171726 kubelet[2715]: E0707 06:12:48.171696 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:12:48.171813 kubelet[2715]: W0707 06:12:48.171792 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:12:48.172196 kubelet[2715]: E0707 06:12:48.172150 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:12:48.172462 kubelet[2715]: E0707 06:12:48.172440 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:12:48.172462 kubelet[2715]: W0707 06:12:48.172456 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:12:48.172642 kubelet[2715]: E0707 06:12:48.172566 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:12:48.172701 kubelet[2715]: E0707 06:12:48.172674 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:12:48.172701 kubelet[2715]: W0707 06:12:48.172691 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:12:48.172796 kubelet[2715]: E0707 06:12:48.172774 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:12:48.173023 kubelet[2715]: E0707 06:12:48.172962 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:12:48.173023 kubelet[2715]: W0707 06:12:48.172974 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:12:48.173758 kubelet[2715]: E0707 06:12:48.173739 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 7 06:12:48.174003 kubelet[2715]: E0707 06:12:48.173949 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.174032 kubelet[2715]: W0707 06:12:48.174003 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.174032 kubelet[2715]: E0707 06:12:48.174014 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:48.198308 kubelet[2715]: E0707 06:12:48.198160 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v2bv6" podUID="67bc0055-3690-4794-8a33-7fab9a16fcdf" Jul 7 06:12:48.251856 kubelet[2715]: E0707 06:12:48.251794 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.251856 kubelet[2715]: W0707 06:12:48.251820 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.251856 kubelet[2715]: E0707 06:12:48.251860 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:48.252113 kubelet[2715]: E0707 06:12:48.252080 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.252113 kubelet[2715]: W0707 06:12:48.252088 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.252113 kubelet[2715]: E0707 06:12:48.252097 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:48.252294 kubelet[2715]: E0707 06:12:48.252255 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.252294 kubelet[2715]: W0707 06:12:48.252272 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.252294 kubelet[2715]: E0707 06:12:48.252280 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:48.252466 kubelet[2715]: E0707 06:12:48.252430 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.252466 kubelet[2715]: W0707 06:12:48.252446 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.252466 kubelet[2715]: E0707 06:12:48.252455 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:48.252649 kubelet[2715]: E0707 06:12:48.252613 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.252649 kubelet[2715]: W0707 06:12:48.252629 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.252649 kubelet[2715]: E0707 06:12:48.252637 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:48.252801 kubelet[2715]: E0707 06:12:48.252777 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.252801 kubelet[2715]: W0707 06:12:48.252791 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.252880 kubelet[2715]: E0707 06:12:48.252817 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:48.253039 kubelet[2715]: E0707 06:12:48.253006 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.253039 kubelet[2715]: W0707 06:12:48.253024 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.253039 kubelet[2715]: E0707 06:12:48.253032 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:48.253265 kubelet[2715]: E0707 06:12:48.253243 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.253265 kubelet[2715]: W0707 06:12:48.253257 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.253265 kubelet[2715]: E0707 06:12:48.253265 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:48.253940 kubelet[2715]: E0707 06:12:48.253908 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.253940 kubelet[2715]: W0707 06:12:48.253924 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.253940 kubelet[2715]: E0707 06:12:48.253933 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:48.255081 kubelet[2715]: E0707 06:12:48.255057 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.255081 kubelet[2715]: W0707 06:12:48.255076 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.255138 kubelet[2715]: E0707 06:12:48.255090 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:48.255571 kubelet[2715]: E0707 06:12:48.255549 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.255571 kubelet[2715]: W0707 06:12:48.255568 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.255631 kubelet[2715]: E0707 06:12:48.255577 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:48.255754 kubelet[2715]: E0707 06:12:48.255734 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.255754 kubelet[2715]: W0707 06:12:48.255747 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.255799 kubelet[2715]: E0707 06:12:48.255756 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:48.256001 kubelet[2715]: E0707 06:12:48.255982 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.256001 kubelet[2715]: W0707 06:12:48.255995 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.256056 kubelet[2715]: E0707 06:12:48.256004 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:48.256207 kubelet[2715]: E0707 06:12:48.256188 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.256207 kubelet[2715]: W0707 06:12:48.256201 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.256250 kubelet[2715]: E0707 06:12:48.256209 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:48.256415 kubelet[2715]: E0707 06:12:48.256397 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.256415 kubelet[2715]: W0707 06:12:48.256409 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.256465 kubelet[2715]: E0707 06:12:48.256418 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:48.256812 kubelet[2715]: E0707 06:12:48.256795 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.256812 kubelet[2715]: W0707 06:12:48.256807 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.256902 kubelet[2715]: E0707 06:12:48.256814 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:48.257112 kubelet[2715]: E0707 06:12:48.257092 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.257112 kubelet[2715]: W0707 06:12:48.257108 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.257160 kubelet[2715]: E0707 06:12:48.257116 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:48.257521 kubelet[2715]: E0707 06:12:48.257483 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.257521 kubelet[2715]: W0707 06:12:48.257502 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.257521 kubelet[2715]: E0707 06:12:48.257511 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:48.257696 kubelet[2715]: E0707 06:12:48.257674 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.257696 kubelet[2715]: W0707 06:12:48.257691 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.257696 kubelet[2715]: E0707 06:12:48.257699 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:48.258056 kubelet[2715]: E0707 06:12:48.258031 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.258056 kubelet[2715]: W0707 06:12:48.258046 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.258056 kubelet[2715]: E0707 06:12:48.258054 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:48.274299 containerd[1540]: time="2025-07-07T06:12:48.273451340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5z5ft,Uid:6eac6bf9-515b-464d-9b62-a1f95e4bd988,Namespace:calico-system,Attempt:0,}" Jul 7 06:12:48.299854 containerd[1540]: time="2025-07-07T06:12:48.298977428Z" level=info msg="connecting to shim a2d7fa44c51b609b1eb9be7c32d8d1c2fcdcaa43552139d9fce0023ad369c78e" address="unix:///run/containerd/s/6f940f23a57b3c0703bda546f27e6a0d61bbc24ca2790152c2574777c93bfb8e" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:12:48.335013 systemd[1]: Started cri-containerd-a2d7fa44c51b609b1eb9be7c32d8d1c2fcdcaa43552139d9fce0023ad369c78e.scope - libcontainer container a2d7fa44c51b609b1eb9be7c32d8d1c2fcdcaa43552139d9fce0023ad369c78e. 
Jul 7 06:12:48.349445 kubelet[2715]: E0707 06:12:48.349336 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.349445 kubelet[2715]: W0707 06:12:48.349366 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.349445 kubelet[2715]: E0707 06:12:48.349392 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:48.349445 kubelet[2715]: I0707 06:12:48.349430 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/67bc0055-3690-4794-8a33-7fab9a16fcdf-registration-dir\") pod \"csi-node-driver-v2bv6\" (UID: \"67bc0055-3690-4794-8a33-7fab9a16fcdf\") " pod="calico-system/csi-node-driver-v2bv6" Jul 7 06:12:48.350118 kubelet[2715]: E0707 06:12:48.350088 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.350118 kubelet[2715]: W0707 06:12:48.350107 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.350118 kubelet[2715]: E0707 06:12:48.350117 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:48.350118 kubelet[2715]: I0707 06:12:48.350132 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/67bc0055-3690-4794-8a33-7fab9a16fcdf-socket-dir\") pod \"csi-node-driver-v2bv6\" (UID: \"67bc0055-3690-4794-8a33-7fab9a16fcdf\") " pod="calico-system/csi-node-driver-v2bv6" Jul 7 06:12:48.350672 kubelet[2715]: E0707 06:12:48.350611 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.350672 kubelet[2715]: W0707 06:12:48.350622 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.350672 kubelet[2715]: E0707 06:12:48.350654 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:48.350672 kubelet[2715]: I0707 06:12:48.350672 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/67bc0055-3690-4794-8a33-7fab9a16fcdf-varrun\") pod \"csi-node-driver-v2bv6\" (UID: \"67bc0055-3690-4794-8a33-7fab9a16fcdf\") " pod="calico-system/csi-node-driver-v2bv6" Jul 7 06:12:48.351193 kubelet[2715]: E0707 06:12:48.351068 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.351341 kubelet[2715]: W0707 06:12:48.351243 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.351341 kubelet[2715]: E0707 06:12:48.351297 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:48.351421 kubelet[2715]: I0707 06:12:48.351326 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67bc0055-3690-4794-8a33-7fab9a16fcdf-kubelet-dir\") pod \"csi-node-driver-v2bv6\" (UID: \"67bc0055-3690-4794-8a33-7fab9a16fcdf\") " pod="calico-system/csi-node-driver-v2bv6" Jul 7 06:12:48.351941 kubelet[2715]: E0707 06:12:48.351797 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.351941 kubelet[2715]: W0707 06:12:48.351924 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.352346 kubelet[2715]: E0707 06:12:48.352078 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:48.352346 kubelet[2715]: I0707 06:12:48.352101 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdlns\" (UniqueName: \"kubernetes.io/projected/67bc0055-3690-4794-8a33-7fab9a16fcdf-kube-api-access-xdlns\") pod \"csi-node-driver-v2bv6\" (UID: \"67bc0055-3690-4794-8a33-7fab9a16fcdf\") " pod="calico-system/csi-node-driver-v2bv6" Jul 7 06:12:48.352482 kubelet[2715]: E0707 06:12:48.352452 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.352564 kubelet[2715]: W0707 06:12:48.352522 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.352645 kubelet[2715]: E0707 06:12:48.352608 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:48.352943 kubelet[2715]: E0707 06:12:48.352908 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.352943 kubelet[2715]: W0707 06:12:48.352918 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.353201 kubelet[2715]: E0707 06:12:48.353144 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:48.353478 kubelet[2715]: E0707 06:12:48.353456 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.353478 kubelet[2715]: W0707 06:12:48.353466 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.353775 kubelet[2715]: E0707 06:12:48.353755 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:48.354137 kubelet[2715]: E0707 06:12:48.354096 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.354676 kubelet[2715]: W0707 06:12:48.354107 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.354895 kubelet[2715]: E0707 06:12:48.354811 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:48.355240 kubelet[2715]: E0707 06:12:48.355197 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.355240 kubelet[2715]: W0707 06:12:48.355208 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.356950 kubelet[2715]: E0707 06:12:48.356865 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:48.357428 kubelet[2715]: E0707 06:12:48.357414 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.357561 kubelet[2715]: W0707 06:12:48.357479 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.357561 kubelet[2715]: E0707 06:12:48.357495 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:48.358753 kubelet[2715]: E0707 06:12:48.358729 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.358894 kubelet[2715]: W0707 06:12:48.358870 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.363901 kubelet[2715]: E0707 06:12:48.362963 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:48.365419 kubelet[2715]: E0707 06:12:48.364729 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.365419 kubelet[2715]: W0707 06:12:48.364747 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.365419 kubelet[2715]: E0707 06:12:48.364757 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:48.365419 kubelet[2715]: E0707 06:12:48.365010 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.365419 kubelet[2715]: W0707 06:12:48.365117 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.365419 kubelet[2715]: E0707 06:12:48.365126 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:48.365419 kubelet[2715]: E0707 06:12:48.365385 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.365419 kubelet[2715]: W0707 06:12:48.365393 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.365419 kubelet[2715]: E0707 06:12:48.365401 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:48.379183 containerd[1540]: time="2025-07-07T06:12:48.379155995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5z5ft,Uid:6eac6bf9-515b-464d-9b62-a1f95e4bd988,Namespace:calico-system,Attempt:0,} returns sandbox id \"a2d7fa44c51b609b1eb9be7c32d8d1c2fcdcaa43552139d9fce0023ad369c78e\"" Jul 7 06:12:48.453554 kubelet[2715]: E0707 06:12:48.453513 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.453554 kubelet[2715]: W0707 06:12:48.453531 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.453554 kubelet[2715]: E0707 06:12:48.453547 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:48.453806 kubelet[2715]: E0707 06:12:48.453753 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.453806 kubelet[2715]: W0707 06:12:48.453794 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.453969 kubelet[2715]: E0707 06:12:48.453813 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:48.454082 kubelet[2715]: E0707 06:12:48.454056 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.454082 kubelet[2715]: W0707 06:12:48.454073 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.454151 kubelet[2715]: E0707 06:12:48.454088 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:48.454333 kubelet[2715]: E0707 06:12:48.454307 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.454333 kubelet[2715]: W0707 06:12:48.454322 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.454422 kubelet[2715]: E0707 06:12:48.454342 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:48.454567 kubelet[2715]: E0707 06:12:48.454554 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.454567 kubelet[2715]: W0707 06:12:48.454562 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.454742 kubelet[2715]: E0707 06:12:48.454582 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:48.454988 kubelet[2715]: E0707 06:12:48.454774 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.454988 kubelet[2715]: W0707 06:12:48.454788 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.454988 kubelet[2715]: E0707 06:12:48.454822 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:48.455081 kubelet[2715]: E0707 06:12:48.455059 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.455081 kubelet[2715]: W0707 06:12:48.455066 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.455163 kubelet[2715]: E0707 06:12:48.455107 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:48.455759 kubelet[2715]: E0707 06:12:48.455305 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.455759 kubelet[2715]: W0707 06:12:48.455318 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.455759 kubelet[2715]: E0707 06:12:48.455363 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:48.455759 kubelet[2715]: E0707 06:12:48.455537 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.455759 kubelet[2715]: W0707 06:12:48.455544 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.455759 kubelet[2715]: E0707 06:12:48.455654 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:48.456032 kubelet[2715]: E0707 06:12:48.455785 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.456032 kubelet[2715]: W0707 06:12:48.455813 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.456032 kubelet[2715]: E0707 06:12:48.455944 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:48.456132 kubelet[2715]: E0707 06:12:48.456074 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.456132 kubelet[2715]: W0707 06:12:48.456080 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.456218 kubelet[2715]: E0707 06:12:48.456195 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:48.456588 kubelet[2715]: E0707 06:12:48.456436 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.456588 kubelet[2715]: W0707 06:12:48.456459 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.456588 kubelet[2715]: E0707 06:12:48.456491 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:48.456760 kubelet[2715]: E0707 06:12:48.456707 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.456760 kubelet[2715]: W0707 06:12:48.456721 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.457088 kubelet[2715]: E0707 06:12:48.456869 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:48.457116 kubelet[2715]: E0707 06:12:48.457098 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.457116 kubelet[2715]: W0707 06:12:48.457108 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.457197 kubelet[2715]: E0707 06:12:48.457168 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:48.457315 kubelet[2715]: E0707 06:12:48.457296 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.457315 kubelet[2715]: W0707 06:12:48.457308 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.457454 kubelet[2715]: E0707 06:12:48.457424 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:48.457522 kubelet[2715]: E0707 06:12:48.457508 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.457522 kubelet[2715]: W0707 06:12:48.457518 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.457567 kubelet[2715]: E0707 06:12:48.457560 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:48.457777 kubelet[2715]: E0707 06:12:48.457762 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.457777 kubelet[2715]: W0707 06:12:48.457773 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.458119 kubelet[2715]: E0707 06:12:48.458097 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:48.458284 kubelet[2715]: E0707 06:12:48.458261 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.458284 kubelet[2715]: W0707 06:12:48.458273 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.458284 kubelet[2715]: E0707 06:12:48.458288 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:48.458602 kubelet[2715]: E0707 06:12:48.458586 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.458602 kubelet[2715]: W0707 06:12:48.458598 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.458674 kubelet[2715]: E0707 06:12:48.458612 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:48.459139 kubelet[2715]: E0707 06:12:48.459116 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.459139 kubelet[2715]: W0707 06:12:48.459132 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.459524 kubelet[2715]: E0707 06:12:48.459209 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:48.459524 kubelet[2715]: E0707 06:12:48.459272 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.459524 kubelet[2715]: W0707 06:12:48.459280 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.459524 kubelet[2715]: E0707 06:12:48.459405 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.459524 kubelet[2715]: W0707 06:12:48.459411 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.459646 kubelet[2715]: E0707 06:12:48.459552 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.459646 kubelet[2715]: W0707 06:12:48.459560 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.459646 kubelet[2715]: E0707 06:12:48.459567 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:48.459646 kubelet[2715]: E0707 06:12:48.459583 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:48.460004 kubelet[2715]: E0707 06:12:48.459734 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.460004 kubelet[2715]: W0707 06:12:48.459747 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.460004 kubelet[2715]: E0707 06:12:48.459754 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:48.460004 kubelet[2715]: E0707 06:12:48.459766 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:48.460182 kubelet[2715]: E0707 06:12:48.460027 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.460182 kubelet[2715]: W0707 06:12:48.460034 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.460182 kubelet[2715]: E0707 06:12:48.460042 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:48.470001 kubelet[2715]: E0707 06:12:48.469816 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:48.470001 kubelet[2715]: W0707 06:12:48.469853 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:48.470001 kubelet[2715]: E0707 06:12:48.469865 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:49.394796 containerd[1540]: time="2025-07-07T06:12:49.394739264Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:49.395699 containerd[1540]: time="2025-07-07T06:12:49.395563899Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 7 06:12:49.396199 containerd[1540]: time="2025-07-07T06:12:49.396166149Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:49.397606 containerd[1540]: time="2025-07-07T06:12:49.397570084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:49.398763 containerd[1540]: time="2025-07-07T06:12:49.398136674Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 1.260181647s" Jul 7 06:12:49.398763 containerd[1540]: time="2025-07-07T06:12:49.398182693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 7 06:12:49.400667 containerd[1540]: time="2025-07-07T06:12:49.400629880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 7 06:12:49.407586 containerd[1540]: time="2025-07-07T06:12:49.407545517Z" level=info msg="CreateContainer within sandbox \"28a01f34469f680aad855a62fad24429c4a97af840dc12800f0f96e28b7a2621\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 7 06:12:49.420107 containerd[1540]: time="2025-07-07T06:12:49.419818480Z" level=info msg="Container f8dbe31bc0b083cdfcf5d65c766b0493aef65ed2cb8bf53f05676880ea42bf2d: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:12:49.427322 containerd[1540]: time="2025-07-07T06:12:49.427290978Z" level=info msg="CreateContainer within sandbox \"28a01f34469f680aad855a62fad24429c4a97af840dc12800f0f96e28b7a2621\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f8dbe31bc0b083cdfcf5d65c766b0493aef65ed2cb8bf53f05676880ea42bf2d\"" Jul 7 06:12:49.428097 containerd[1540]: time="2025-07-07T06:12:49.428074484Z" level=info msg="StartContainer for \"f8dbe31bc0b083cdfcf5d65c766b0493aef65ed2cb8bf53f05676880ea42bf2d\"" Jul 7 06:12:49.429729 containerd[1540]: time="2025-07-07T06:12:49.429702615Z" level=info msg="connecting to shim f8dbe31bc0b083cdfcf5d65c766b0493aef65ed2cb8bf53f05676880ea42bf2d" address="unix:///run/containerd/s/8070f1d547acb51b5de531a42c56055bfd059f13af17fc14824a782c9123c2b2" protocol=ttrpc version=3 Jul 7 06:12:49.449969 systemd[1]: Started cri-containerd-f8dbe31bc0b083cdfcf5d65c766b0493aef65ed2cb8bf53f05676880ea42bf2d.scope - libcontainer container 
f8dbe31bc0b083cdfcf5d65c766b0493aef65ed2cb8bf53f05676880ea42bf2d. Jul 7 06:12:49.518509 containerd[1540]: time="2025-07-07T06:12:49.518443265Z" level=info msg="StartContainer for \"f8dbe31bc0b083cdfcf5d65c766b0493aef65ed2cb8bf53f05676880ea42bf2d\" returns successfully" Jul 7 06:12:49.745821 kubelet[2715]: E0707 06:12:49.745778 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v2bv6" podUID="67bc0055-3690-4794-8a33-7fab9a16fcdf" Jul 7 06:12:49.838121 kubelet[2715]: E0707 06:12:49.837913 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:12:49.847656 kubelet[2715]: I0707 06:12:49.847604 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-68dfc65c8-bs8wl" podStartSLOduration=1.5855742080000002 podStartE2EDuration="2.847591632s" podCreationTimestamp="2025-07-07 06:12:47 +0000 UTC" firstStartedPulling="2025-07-07 06:12:48.137418937 +0000 UTC m=+17.525301937" lastFinishedPulling="2025-07-07 06:12:49.399436361 +0000 UTC m=+18.787319361" observedRunningTime="2025-07-07 06:12:49.846973503 +0000 UTC m=+19.234856503" watchObservedRunningTime="2025-07-07 06:12:49.847591632 +0000 UTC m=+19.235474632" Jul 7 06:12:49.873671 kubelet[2715]: E0707 06:12:49.873645 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:49.873671 kubelet[2715]: W0707 06:12:49.873664 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:49.873797 
kubelet[2715]: E0707 06:12:49.873680 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:49.874015 kubelet[2715]: E0707 06:12:49.873989 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:49.874015 kubelet[2715]: W0707 06:12:49.874001 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:49.874015 kubelet[2715]: E0707 06:12:49.874010 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:49.874181 kubelet[2715]: E0707 06:12:49.874166 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:49.874181 kubelet[2715]: W0707 06:12:49.874177 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:49.874273 kubelet[2715]: E0707 06:12:49.874185 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:49.874349 kubelet[2715]: E0707 06:12:49.874335 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:49.874349 kubelet[2715]: W0707 06:12:49.874346 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:49.874448 kubelet[2715]: E0707 06:12:49.874355 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:49.874516 kubelet[2715]: E0707 06:12:49.874499 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:49.874516 kubelet[2715]: W0707 06:12:49.874511 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:49.874614 kubelet[2715]: E0707 06:12:49.874520 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:49.874679 kubelet[2715]: E0707 06:12:49.874662 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:49.874679 kubelet[2715]: W0707 06:12:49.874676 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:49.874727 kubelet[2715]: E0707 06:12:49.874683 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:49.874919 kubelet[2715]: E0707 06:12:49.874891 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:49.874919 kubelet[2715]: W0707 06:12:49.874913 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:49.874919 kubelet[2715]: E0707 06:12:49.874934 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:49.875151 kubelet[2715]: E0707 06:12:49.875135 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:49.875151 kubelet[2715]: W0707 06:12:49.875148 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:49.875151 kubelet[2715]: E0707 06:12:49.875158 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:49.875351 kubelet[2715]: E0707 06:12:49.875328 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:49.875351 kubelet[2715]: W0707 06:12:49.875339 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:49.875351 kubelet[2715]: E0707 06:12:49.875347 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:49.875763 kubelet[2715]: E0707 06:12:49.875730 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:49.875763 kubelet[2715]: W0707 06:12:49.875745 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:49.875763 kubelet[2715]: E0707 06:12:49.875755 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:49.876443 kubelet[2715]: E0707 06:12:49.875921 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:49.876443 kubelet[2715]: W0707 06:12:49.875929 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:49.876443 kubelet[2715]: E0707 06:12:49.875936 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:49.876443 kubelet[2715]: E0707 06:12:49.876122 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:49.876443 kubelet[2715]: W0707 06:12:49.876130 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:49.876443 kubelet[2715]: E0707 06:12:49.876138 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:49.876443 kubelet[2715]: E0707 06:12:49.876324 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:49.876443 kubelet[2715]: W0707 06:12:49.876360 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:49.876443 kubelet[2715]: E0707 06:12:49.876368 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:49.876621 kubelet[2715]: E0707 06:12:49.876552 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:49.876621 kubelet[2715]: W0707 06:12:49.876559 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:49.876621 kubelet[2715]: E0707 06:12:49.876567 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:49.877121 kubelet[2715]: E0707 06:12:49.876755 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:49.877121 kubelet[2715]: W0707 06:12:49.876802 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:49.877121 kubelet[2715]: E0707 06:12:49.876809 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:49.967687 kubelet[2715]: E0707 06:12:49.967645 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:49.967687 kubelet[2715]: W0707 06:12:49.967669 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:49.967687 kubelet[2715]: E0707 06:12:49.967687 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:49.968147 kubelet[2715]: E0707 06:12:49.968119 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:49.968147 kubelet[2715]: W0707 06:12:49.968137 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:49.968464 kubelet[2715]: E0707 06:12:49.968243 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:49.968559 kubelet[2715]: E0707 06:12:49.968544 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:49.968559 kubelet[2715]: W0707 06:12:49.968556 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:49.968640 kubelet[2715]: E0707 06:12:49.968622 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:49.968965 kubelet[2715]: E0707 06:12:49.968947 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:49.968965 kubelet[2715]: W0707 06:12:49.968958 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:49.969042 kubelet[2715]: E0707 06:12:49.968972 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:49.969374 kubelet[2715]: E0707 06:12:49.969356 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:49.969374 kubelet[2715]: W0707 06:12:49.969371 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:49.969447 kubelet[2715]: E0707 06:12:49.969405 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:49.969697 kubelet[2715]: E0707 06:12:49.969674 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:49.969697 kubelet[2715]: W0707 06:12:49.969695 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:49.969763 kubelet[2715]: E0707 06:12:49.969726 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:49.970250 kubelet[2715]: E0707 06:12:49.970230 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:49.970250 kubelet[2715]: W0707 06:12:49.970245 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:49.970430 kubelet[2715]: E0707 06:12:49.970266 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:49.970537 kubelet[2715]: E0707 06:12:49.970510 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:49.970537 kubelet[2715]: W0707 06:12:49.970528 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:49.970671 kubelet[2715]: E0707 06:12:49.970554 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:49.970866 kubelet[2715]: E0707 06:12:49.970849 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:49.970866 kubelet[2715]: W0707 06:12:49.970864 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:49.970928 kubelet[2715]: E0707 06:12:49.970888 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:49.971250 kubelet[2715]: E0707 06:12:49.971196 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:49.971250 kubelet[2715]: W0707 06:12:49.971211 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:49.971250 kubelet[2715]: E0707 06:12:49.971230 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:49.971541 kubelet[2715]: E0707 06:12:49.971406 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:49.971541 kubelet[2715]: W0707 06:12:49.971424 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:49.971541 kubelet[2715]: E0707 06:12:49.971434 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:49.972044 kubelet[2715]: E0707 06:12:49.972024 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:49.972044 kubelet[2715]: W0707 06:12:49.972038 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:49.972164 kubelet[2715]: E0707 06:12:49.972072 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:49.973042 kubelet[2715]: E0707 06:12:49.973026 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:49.973042 kubelet[2715]: W0707 06:12:49.973038 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:49.973123 kubelet[2715]: E0707 06:12:49.973063 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:49.973295 kubelet[2715]: E0707 06:12:49.973279 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:49.973295 kubelet[2715]: W0707 06:12:49.973290 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:49.973391 kubelet[2715]: E0707 06:12:49.973374 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:49.973974 kubelet[2715]: E0707 06:12:49.973950 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:49.973974 kubelet[2715]: W0707 06:12:49.973966 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:49.974045 kubelet[2715]: E0707 06:12:49.973978 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:49.974151 kubelet[2715]: E0707 06:12:49.974136 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:49.974151 kubelet[2715]: W0707 06:12:49.974147 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:49.974151 kubelet[2715]: E0707 06:12:49.974155 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:49.974458 kubelet[2715]: E0707 06:12:49.974443 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:49.974458 kubelet[2715]: W0707 06:12:49.974455 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:49.974503 kubelet[2715]: E0707 06:12:49.974464 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:49.974889 kubelet[2715]: E0707 06:12:49.974861 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:49.974928 kubelet[2715]: W0707 06:12:49.974881 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:49.974928 kubelet[2715]: E0707 06:12:49.974908 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:50.056478 containerd[1540]: time="2025-07-07T06:12:50.053794422Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:50.056478 containerd[1540]: time="2025-07-07T06:12:50.055442965Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 7 06:12:50.056478 containerd[1540]: time="2025-07-07T06:12:50.055581033Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:50.059551 containerd[1540]: time="2025-07-07T06:12:50.059508917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:50.060357 containerd[1540]: time="2025-07-07T06:12:50.060330944Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 659.671865ms" Jul 7 06:12:50.060438 containerd[1540]: time="2025-07-07T06:12:50.060422182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 7 06:12:50.063118 containerd[1540]: time="2025-07-07T06:12:50.063099388Z" level=info msg="CreateContainer within sandbox \"a2d7fa44c51b609b1eb9be7c32d8d1c2fcdcaa43552139d9fce0023ad369c78e\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 7 06:12:50.069867 containerd[1540]: time="2025-07-07T06:12:50.068447029Z" level=info msg="Container e096c5ed0cc2a734d707df23b71b26fc06445c9927bd26fab01a05022af69dee: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:12:50.080229 containerd[1540]: time="2025-07-07T06:12:50.080191224Z" level=info msg="CreateContainer within sandbox \"a2d7fa44c51b609b1eb9be7c32d8d1c2fcdcaa43552139d9fce0023ad369c78e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e096c5ed0cc2a734d707df23b71b26fc06445c9927bd26fab01a05022af69dee\"" Jul 7 06:12:50.080778 containerd[1540]: time="2025-07-07T06:12:50.080738295Z" level=info msg="StartContainer for \"e096c5ed0cc2a734d707df23b71b26fc06445c9927bd26fab01a05022af69dee\"" Jul 7 06:12:50.082064 containerd[1540]: time="2025-07-07T06:12:50.082034614Z" level=info msg="connecting to shim e096c5ed0cc2a734d707df23b71b26fc06445c9927bd26fab01a05022af69dee" address="unix:///run/containerd/s/6f940f23a57b3c0703bda546f27e6a0d61bbc24ca2790152c2574777c93bfb8e" protocol=ttrpc version=3 Jul 7 06:12:50.108172 systemd[1]: Started cri-containerd-e096c5ed0cc2a734d707df23b71b26fc06445c9927bd26fab01a05022af69dee.scope - libcontainer container e096c5ed0cc2a734d707df23b71b26fc06445c9927bd26fab01a05022af69dee. Jul 7 06:12:50.151948 containerd[1540]: time="2025-07-07T06:12:50.151914005Z" level=info msg="StartContainer for \"e096c5ed0cc2a734d707df23b71b26fc06445c9927bd26fab01a05022af69dee\" returns successfully" Jul 7 06:12:50.168288 systemd[1]: cri-containerd-e096c5ed0cc2a734d707df23b71b26fc06445c9927bd26fab01a05022af69dee.scope: Deactivated successfully. 
Jul 7 06:12:50.172039 containerd[1540]: time="2025-07-07T06:12:50.171877724Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e096c5ed0cc2a734d707df23b71b26fc06445c9927bd26fab01a05022af69dee\" id:\"e096c5ed0cc2a734d707df23b71b26fc06445c9927bd26fab01a05022af69dee\" pid:3429 exited_at:{seconds:1751868770 nanos:171538329}" Jul 7 06:12:50.172039 containerd[1540]: time="2025-07-07T06:12:50.171965502Z" level=info msg="received exit event container_id:\"e096c5ed0cc2a734d707df23b71b26fc06445c9927bd26fab01a05022af69dee\" id:\"e096c5ed0cc2a734d707df23b71b26fc06445c9927bd26fab01a05022af69dee\" pid:3429 exited_at:{seconds:1751868770 nanos:171538329}" Jul 7 06:12:50.194306 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e096c5ed0cc2a734d707df23b71b26fc06445c9927bd26fab01a05022af69dee-rootfs.mount: Deactivated successfully. Jul 7 06:12:50.841549 kubelet[2715]: I0707 06:12:50.841310 2715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:12:50.842439 kubelet[2715]: E0707 06:12:50.842410 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:12:50.843035 containerd[1540]: time="2025-07-07T06:12:50.842914083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 7 06:12:51.746550 kubelet[2715]: E0707 06:12:51.746260 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v2bv6" podUID="67bc0055-3690-4794-8a33-7fab9a16fcdf" Jul 7 06:12:52.763334 containerd[1540]: time="2025-07-07T06:12:52.763289069Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 
06:12:52.764125 containerd[1540]: time="2025-07-07T06:12:52.764082448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 7 06:12:52.765535 containerd[1540]: time="2025-07-07T06:12:52.764550231Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:52.766098 containerd[1540]: time="2025-07-07T06:12:52.766071999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:52.766687 containerd[1540]: time="2025-07-07T06:12:52.766665420Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 1.923205866s" Jul 7 06:12:52.766777 containerd[1540]: time="2025-07-07T06:12:52.766763569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 7 06:12:52.769031 containerd[1540]: time="2025-07-07T06:12:52.768996836Z" level=info msg="CreateContainer within sandbox \"a2d7fa44c51b609b1eb9be7c32d8d1c2fcdcaa43552139d9fce0023ad369c78e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 7 06:12:52.775192 containerd[1540]: time="2025-07-07T06:12:52.775172856Z" level=info msg="Container b4d4f67175d3bf3cd24f6aca26a2335ecb22e12e689850e299a8c3759575b6d0: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:12:52.792000 containerd[1540]: time="2025-07-07T06:12:52.791943362Z" level=info msg="CreateContainer within sandbox 
\"a2d7fa44c51b609b1eb9be7c32d8d1c2fcdcaa43552139d9fce0023ad369c78e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b4d4f67175d3bf3cd24f6aca26a2335ecb22e12e689850e299a8c3759575b6d0\"" Jul 7 06:12:52.794237 containerd[1540]: time="2025-07-07T06:12:52.792806909Z" level=info msg="StartContainer for \"b4d4f67175d3bf3cd24f6aca26a2335ecb22e12e689850e299a8c3759575b6d0\"" Jul 7 06:12:52.796207 containerd[1540]: time="2025-07-07T06:12:52.796099981Z" level=info msg="connecting to shim b4d4f67175d3bf3cd24f6aca26a2335ecb22e12e689850e299a8c3759575b6d0" address="unix:///run/containerd/s/6f940f23a57b3c0703bda546f27e6a0d61bbc24ca2790152c2574777c93bfb8e" protocol=ttrpc version=3 Jul 7 06:12:52.823959 systemd[1]: Started cri-containerd-b4d4f67175d3bf3cd24f6aca26a2335ecb22e12e689850e299a8c3759575b6d0.scope - libcontainer container b4d4f67175d3bf3cd24f6aca26a2335ecb22e12e689850e299a8c3759575b6d0. Jul 7 06:12:52.880475 containerd[1540]: time="2025-07-07T06:12:52.878949333Z" level=info msg="StartContainer for \"b4d4f67175d3bf3cd24f6aca26a2335ecb22e12e689850e299a8c3759575b6d0\" returns successfully" Jul 7 06:12:53.393855 containerd[1540]: time="2025-07-07T06:12:53.393691707Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 06:12:53.397685 systemd[1]: cri-containerd-b4d4f67175d3bf3cd24f6aca26a2335ecb22e12e689850e299a8c3759575b6d0.scope: Deactivated successfully. Jul 7 06:12:53.398040 systemd[1]: cri-containerd-b4d4f67175d3bf3cd24f6aca26a2335ecb22e12e689850e299a8c3759575b6d0.scope: Consumed 558ms CPU time, 197.8M memory peak, 171.2M written to disk. 
Jul 7 06:12:53.399364 containerd[1540]: time="2025-07-07T06:12:53.399329570Z" level=info msg="received exit event container_id:\"b4d4f67175d3bf3cd24f6aca26a2335ecb22e12e689850e299a8c3759575b6d0\" id:\"b4d4f67175d3bf3cd24f6aca26a2335ecb22e12e689850e299a8c3759575b6d0\" pid:3487 exited_at:{seconds:1751868773 nanos:399092693}" Jul 7 06:12:53.399624 containerd[1540]: time="2025-07-07T06:12:53.399461648Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b4d4f67175d3bf3cd24f6aca26a2335ecb22e12e689850e299a8c3759575b6d0\" id:\"b4d4f67175d3bf3cd24f6aca26a2335ecb22e12e689850e299a8c3759575b6d0\" pid:3487 exited_at:{seconds:1751868773 nanos:399092693}" Jul 7 06:12:53.429216 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4d4f67175d3bf3cd24f6aca26a2335ecb22e12e689850e299a8c3759575b6d0-rootfs.mount: Deactivated successfully. Jul 7 06:12:53.477136 kubelet[2715]: I0707 06:12:53.476852 2715 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 7 06:12:53.509137 systemd[1]: Created slice kubepods-burstable-pod50095d4e_5ad9_4407_9d8b_ae5276b53aa1.slice - libcontainer container kubepods-burstable-pod50095d4e_5ad9_4407_9d8b_ae5276b53aa1.slice. Jul 7 06:12:53.517398 systemd[1]: Created slice kubepods-besteffort-podeea1d1cc_704d_4bb8_8684_4c28eeac74a0.slice - libcontainer container kubepods-besteffort-podeea1d1cc_704d_4bb8_8684_4c28eeac74a0.slice. Jul 7 06:12:53.538403 systemd[1]: Created slice kubepods-besteffort-pod94fe1b3f_505a_4c7f_bccc_eff5407fbbb4.slice - libcontainer container kubepods-besteffort-pod94fe1b3f_505a_4c7f_bccc_eff5407fbbb4.slice. Jul 7 06:12:53.554253 systemd[1]: Created slice kubepods-burstable-pod20b9fb2d_4d88_4e4f_9147_8c3eb5c02c41.slice - libcontainer container kubepods-burstable-pod20b9fb2d_4d88_4e4f_9147_8c3eb5c02c41.slice. 
Jul 7 06:12:53.563869 systemd[1]: Created slice kubepods-besteffort-pod394866ed_bdb7_4703_9a29_b955df5f7d92.slice - libcontainer container kubepods-besteffort-pod394866ed_bdb7_4703_9a29_b955df5f7d92.slice. Jul 7 06:12:53.571614 systemd[1]: Created slice kubepods-besteffort-pod5387ebc0_0c6e_40e8_b8c6_58824e246c67.slice - libcontainer container kubepods-besteffort-pod5387ebc0_0c6e_40e8_b8c6_58824e246c67.slice. Jul 7 06:12:53.582740 systemd[1]: Created slice kubepods-besteffort-pod43ff4723_2d17_47d6_a685_c6e35a5c21ea.slice - libcontainer container kubepods-besteffort-pod43ff4723_2d17_47d6_a685_c6e35a5c21ea.slice. Jul 7 06:12:53.587400 systemd[1]: Created slice kubepods-besteffort-pode24f8ad2_2eec_404f_9326_0a1a6630a383.slice - libcontainer container kubepods-besteffort-pode24f8ad2_2eec_404f_9326_0a1a6630a383.slice. Jul 7 06:12:53.590601 kubelet[2715]: I0707 06:12:53.590560 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/eea1d1cc-704d-4bb8-8684-4c28eeac74a0-calico-apiserver-certs\") pod \"calico-apiserver-f64c59f69-9hhmk\" (UID: \"eea1d1cc-704d-4bb8-8684-4c28eeac74a0\") " pod="calico-apiserver/calico-apiserver-f64c59f69-9hhmk" Jul 7 06:12:53.590601 kubelet[2715]: I0707 06:12:53.590595 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2nw8\" (UniqueName: \"kubernetes.io/projected/50095d4e-5ad9-4407-9d8b-ae5276b53aa1-kube-api-access-v2nw8\") pod \"coredns-7c65d6cfc9-fqrrz\" (UID: \"50095d4e-5ad9-4407-9d8b-ae5276b53aa1\") " pod="kube-system/coredns-7c65d6cfc9-fqrrz" Jul 7 06:12:53.590690 kubelet[2715]: I0707 06:12:53.590614 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm82j\" (UniqueName: \"kubernetes.io/projected/eea1d1cc-704d-4bb8-8684-4c28eeac74a0-kube-api-access-qm82j\") pod \"calico-apiserver-f64c59f69-9hhmk\" 
(UID: \"eea1d1cc-704d-4bb8-8684-4c28eeac74a0\") " pod="calico-apiserver/calico-apiserver-f64c59f69-9hhmk" Jul 7 06:12:53.590690 kubelet[2715]: I0707 06:12:53.590630 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50095d4e-5ad9-4407-9d8b-ae5276b53aa1-config-volume\") pod \"coredns-7c65d6cfc9-fqrrz\" (UID: \"50095d4e-5ad9-4407-9d8b-ae5276b53aa1\") " pod="kube-system/coredns-7c65d6cfc9-fqrrz" Jul 7 06:12:53.691250 kubelet[2715]: I0707 06:12:53.691207 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gtmk\" (UniqueName: \"kubernetes.io/projected/e24f8ad2-2eec-404f-9326-0a1a6630a383-kube-api-access-6gtmk\") pod \"goldmane-58fd7646b9-glzkq\" (UID: \"e24f8ad2-2eec-404f-9326-0a1a6630a383\") " pod="calico-system/goldmane-58fd7646b9-glzkq" Jul 7 06:12:53.691250 kubelet[2715]: I0707 06:12:53.691250 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/43ff4723-2d17-47d6-a685-c6e35a5c21ea-calico-apiserver-certs\") pod \"calico-apiserver-544ddc8dd6-m8cjm\" (UID: \"43ff4723-2d17-47d6-a685-c6e35a5c21ea\") " pod="calico-apiserver/calico-apiserver-544ddc8dd6-m8cjm" Jul 7 06:12:53.691428 kubelet[2715]: I0707 06:12:53.691270 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k4hb\" (UniqueName: \"kubernetes.io/projected/394866ed-bdb7-4703-9a29-b955df5f7d92-kube-api-access-5k4hb\") pod \"whisker-75c946bd69-g9fcp\" (UID: \"394866ed-bdb7-4703-9a29-b955df5f7d92\") " pod="calico-system/whisker-75c946bd69-g9fcp" Jul 7 06:12:53.691428 kubelet[2715]: I0707 06:12:53.691288 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/94fe1b3f-505a-4c7f-bccc-eff5407fbbb4-calico-apiserver-certs\") pod \"calico-apiserver-f64c59f69-rg5ps\" (UID: \"94fe1b3f-505a-4c7f-bccc-eff5407fbbb4\") " pod="calico-apiserver/calico-apiserver-f64c59f69-rg5ps" Jul 7 06:12:53.691428 kubelet[2715]: I0707 06:12:53.691304 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxjjc\" (UniqueName: \"kubernetes.io/projected/5387ebc0-0c6e-40e8-b8c6-58824e246c67-kube-api-access-rxjjc\") pod \"calico-kube-controllers-5d96b9cf79-lxh57\" (UID: \"5387ebc0-0c6e-40e8-b8c6-58824e246c67\") " pod="calico-system/calico-kube-controllers-5d96b9cf79-lxh57" Jul 7 06:12:53.691428 kubelet[2715]: I0707 06:12:53.691324 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e24f8ad2-2eec-404f-9326-0a1a6630a383-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-glzkq\" (UID: \"e24f8ad2-2eec-404f-9326-0a1a6630a383\") " pod="calico-system/goldmane-58fd7646b9-glzkq" Jul 7 06:12:53.691428 kubelet[2715]: I0707 06:12:53.691338 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/394866ed-bdb7-4703-9a29-b955df5f7d92-whisker-backend-key-pair\") pod \"whisker-75c946bd69-g9fcp\" (UID: \"394866ed-bdb7-4703-9a29-b955df5f7d92\") " pod="calico-system/whisker-75c946bd69-g9fcp" Jul 7 06:12:53.691539 kubelet[2715]: I0707 06:12:53.691354 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e24f8ad2-2eec-404f-9326-0a1a6630a383-goldmane-key-pair\") pod \"goldmane-58fd7646b9-glzkq\" (UID: \"e24f8ad2-2eec-404f-9326-0a1a6630a383\") " pod="calico-system/goldmane-58fd7646b9-glzkq" Jul 7 06:12:53.691539 kubelet[2715]: I0707 06:12:53.691379 2715 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzcn2\" (UniqueName: \"kubernetes.io/projected/94fe1b3f-505a-4c7f-bccc-eff5407fbbb4-kube-api-access-lzcn2\") pod \"calico-apiserver-f64c59f69-rg5ps\" (UID: \"94fe1b3f-505a-4c7f-bccc-eff5407fbbb4\") " pod="calico-apiserver/calico-apiserver-f64c59f69-rg5ps" Jul 7 06:12:53.691539 kubelet[2715]: I0707 06:12:53.691404 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e24f8ad2-2eec-404f-9326-0a1a6630a383-config\") pod \"goldmane-58fd7646b9-glzkq\" (UID: \"e24f8ad2-2eec-404f-9326-0a1a6630a383\") " pod="calico-system/goldmane-58fd7646b9-glzkq" Jul 7 06:12:53.691539 kubelet[2715]: I0707 06:12:53.691420 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh97c\" (UniqueName: \"kubernetes.io/projected/43ff4723-2d17-47d6-a685-c6e35a5c21ea-kube-api-access-sh97c\") pod \"calico-apiserver-544ddc8dd6-m8cjm\" (UID: \"43ff4723-2d17-47d6-a685-c6e35a5c21ea\") " pod="calico-apiserver/calico-apiserver-544ddc8dd6-m8cjm" Jul 7 06:12:53.691539 kubelet[2715]: I0707 06:12:53.691436 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/394866ed-bdb7-4703-9a29-b955df5f7d92-whisker-ca-bundle\") pod \"whisker-75c946bd69-g9fcp\" (UID: \"394866ed-bdb7-4703-9a29-b955df5f7d92\") " pod="calico-system/whisker-75c946bd69-g9fcp" Jul 7 06:12:53.691645 kubelet[2715]: I0707 06:12:53.691452 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5387ebc0-0c6e-40e8-b8c6-58824e246c67-tigera-ca-bundle\") pod \"calico-kube-controllers-5d96b9cf79-lxh57\" (UID: \"5387ebc0-0c6e-40e8-b8c6-58824e246c67\") " 
pod="calico-system/calico-kube-controllers-5d96b9cf79-lxh57" Jul 7 06:12:53.691645 kubelet[2715]: I0707 06:12:53.691466 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnlcp\" (UniqueName: \"kubernetes.io/projected/20b9fb2d-4d88-4e4f-9147-8c3eb5c02c41-kube-api-access-bnlcp\") pod \"coredns-7c65d6cfc9-8djv4\" (UID: \"20b9fb2d-4d88-4e4f-9147-8c3eb5c02c41\") " pod="kube-system/coredns-7c65d6cfc9-8djv4" Jul 7 06:12:53.691645 kubelet[2715]: I0707 06:12:53.691502 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20b9fb2d-4d88-4e4f-9147-8c3eb5c02c41-config-volume\") pod \"coredns-7c65d6cfc9-8djv4\" (UID: \"20b9fb2d-4d88-4e4f-9147-8c3eb5c02c41\") " pod="kube-system/coredns-7c65d6cfc9-8djv4" Jul 7 06:12:53.752058 systemd[1]: Created slice kubepods-besteffort-pod67bc0055_3690_4794_8a33_7fab9a16fcdf.slice - libcontainer container kubepods-besteffort-pod67bc0055_3690_4794_8a33_7fab9a16fcdf.slice. 
Jul 7 06:12:53.755265 containerd[1540]: time="2025-07-07T06:12:53.755118897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v2bv6,Uid:67bc0055-3690-4794-8a33-7fab9a16fcdf,Namespace:calico-system,Attempt:0,}" Jul 7 06:12:53.820867 kubelet[2715]: E0707 06:12:53.819512 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:12:53.823693 containerd[1540]: time="2025-07-07T06:12:53.823651721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fqrrz,Uid:50095d4e-5ad9-4407-9d8b-ae5276b53aa1,Namespace:kube-system,Attempt:0,}" Jul 7 06:12:53.826722 containerd[1540]: time="2025-07-07T06:12:53.826667470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f64c59f69-9hhmk,Uid:eea1d1cc-704d-4bb8-8684-4c28eeac74a0,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:12:53.862358 kubelet[2715]: E0707 06:12:53.862165 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:12:53.866145 containerd[1540]: time="2025-07-07T06:12:53.864005279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8djv4,Uid:20b9fb2d-4d88-4e4f-9147-8c3eb5c02c41,Namespace:kube-system,Attempt:0,}" Jul 7 06:12:53.868858 containerd[1540]: time="2025-07-07T06:12:53.868813844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75c946bd69-g9fcp,Uid:394866ed-bdb7-4703-9a29-b955df5f7d92,Namespace:calico-system,Attempt:0,}" Jul 7 06:12:53.884195 containerd[1540]: time="2025-07-07T06:12:53.884167064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d96b9cf79-lxh57,Uid:5387ebc0-0c6e-40e8-b8c6-58824e246c67,Namespace:calico-system,Attempt:0,}" Jul 7 06:12:53.889190 containerd[1540]: 
time="2025-07-07T06:12:53.886011149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 7 06:12:53.894557 containerd[1540]: time="2025-07-07T06:12:53.890766863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-544ddc8dd6-m8cjm,Uid:43ff4723-2d17-47d6-a685-c6e35a5c21ea,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:12:53.897822 containerd[1540]: time="2025-07-07T06:12:53.895623327Z" level=error msg="Failed to destroy network for sandbox \"f2f26a99675757628a4c86656eaf555fe92ec5a3ecd030d45d62e0f98934129d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:53.898044 containerd[1540]: time="2025-07-07T06:12:53.892619278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-glzkq,Uid:e24f8ad2-2eec-404f-9326-0a1a6630a383,Namespace:calico-system,Attempt:0,}" Jul 7 06:12:53.919726 containerd[1540]: time="2025-07-07T06:12:53.919324773Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v2bv6,Uid:67bc0055-3690-4794-8a33-7fab9a16fcdf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2f26a99675757628a4c86656eaf555fe92ec5a3ecd030d45d62e0f98934129d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:53.922428 kubelet[2715]: E0707 06:12:53.922381 2715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2f26a99675757628a4c86656eaf555fe92ec5a3ecd030d45d62e0f98934129d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 7 06:12:53.922558 kubelet[2715]: E0707 06:12:53.922541 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2f26a99675757628a4c86656eaf555fe92ec5a3ecd030d45d62e0f98934129d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v2bv6" Jul 7 06:12:53.922657 kubelet[2715]: E0707 06:12:53.922630 2715 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2f26a99675757628a4c86656eaf555fe92ec5a3ecd030d45d62e0f98934129d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v2bv6" Jul 7 06:12:53.923057 kubelet[2715]: E0707 06:12:53.923027 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-v2bv6_calico-system(67bc0055-3690-4794-8a33-7fab9a16fcdf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-v2bv6_calico-system(67bc0055-3690-4794-8a33-7fab9a16fcdf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f2f26a99675757628a4c86656eaf555fe92ec5a3ecd030d45d62e0f98934129d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v2bv6" podUID="67bc0055-3690-4794-8a33-7fab9a16fcdf" Jul 7 06:12:54.008765 containerd[1540]: time="2025-07-07T06:12:54.008243244Z" level=error msg="Failed to destroy network for sandbox \"3d4e52d34e10be78ebadb618c8a8a9370424e948b3e94b5dd418ef2ce722b1fe\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:54.010462 containerd[1540]: time="2025-07-07T06:12:54.010391887Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f64c59f69-9hhmk,Uid:eea1d1cc-704d-4bb8-8684-4c28eeac74a0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d4e52d34e10be78ebadb618c8a8a9370424e948b3e94b5dd418ef2ce722b1fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:54.011789 kubelet[2715]: E0707 06:12:54.011153 2715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d4e52d34e10be78ebadb618c8a8a9370424e948b3e94b5dd418ef2ce722b1fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:54.011789 kubelet[2715]: E0707 06:12:54.011213 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d4e52d34e10be78ebadb618c8a8a9370424e948b3e94b5dd418ef2ce722b1fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f64c59f69-9hhmk" Jul 7 06:12:54.011789 kubelet[2715]: E0707 06:12:54.011252 2715 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d4e52d34e10be78ebadb618c8a8a9370424e948b3e94b5dd418ef2ce722b1fe\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f64c59f69-9hhmk" Jul 7 06:12:54.012064 kubelet[2715]: E0707 06:12:54.011316 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f64c59f69-9hhmk_calico-apiserver(eea1d1cc-704d-4bb8-8684-4c28eeac74a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f64c59f69-9hhmk_calico-apiserver(eea1d1cc-704d-4bb8-8684-4c28eeac74a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d4e52d34e10be78ebadb618c8a8a9370424e948b3e94b5dd418ef2ce722b1fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f64c59f69-9hhmk" podUID="eea1d1cc-704d-4bb8-8684-4c28eeac74a0" Jul 7 06:12:54.092337 containerd[1540]: time="2025-07-07T06:12:54.092285537Z" level=error msg="Failed to destroy network for sandbox \"8e51b1720ad52344322c2a10c2d141b45294cdc57a17e5c397b21f13e691bc53\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:54.097132 containerd[1540]: time="2025-07-07T06:12:54.095264899Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fqrrz,Uid:50095d4e-5ad9-4407-9d8b-ae5276b53aa1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e51b1720ad52344322c2a10c2d141b45294cdc57a17e5c397b21f13e691bc53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Jul 7 06:12:54.097242 kubelet[2715]: E0707 06:12:54.095502 2715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e51b1720ad52344322c2a10c2d141b45294cdc57a17e5c397b21f13e691bc53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:54.097242 kubelet[2715]: E0707 06:12:54.095566 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e51b1720ad52344322c2a10c2d141b45294cdc57a17e5c397b21f13e691bc53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-fqrrz" Jul 7 06:12:54.097242 kubelet[2715]: E0707 06:12:54.095583 2715 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e51b1720ad52344322c2a10c2d141b45294cdc57a17e5c397b21f13e691bc53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-fqrrz" Jul 7 06:12:54.097313 kubelet[2715]: E0707 06:12:54.095644 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-fqrrz_kube-system(50095d4e-5ad9-4407-9d8b-ae5276b53aa1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-fqrrz_kube-system(50095d4e-5ad9-4407-9d8b-ae5276b53aa1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e51b1720ad52344322c2a10c2d141b45294cdc57a17e5c397b21f13e691bc53\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-fqrrz" podUID="50095d4e-5ad9-4407-9d8b-ae5276b53aa1" Jul 7 06:12:54.102241 containerd[1540]: time="2025-07-07T06:12:54.102157141Z" level=error msg="Failed to destroy network for sandbox \"c95911a2815a4bb011dd30abaf073cab737d92afd02c94799c63c652d335bedf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:54.104181 containerd[1540]: time="2025-07-07T06:12:54.103322706Z" level=error msg="Failed to destroy network for sandbox \"e78a9cd4ab137a58363a0c069c8a0949c4fa321fb7c5d382064160d27326b55f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:54.104685 containerd[1540]: time="2025-07-07T06:12:54.103355566Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-544ddc8dd6-m8cjm,Uid:43ff4723-2d17-47d6-a685-c6e35a5c21ea,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c95911a2815a4bb011dd30abaf073cab737d92afd02c94799c63c652d335bedf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:54.104920 kubelet[2715]: E0707 06:12:54.104767 2715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c95911a2815a4bb011dd30abaf073cab737d92afd02c94799c63c652d335bedf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:54.105536 kubelet[2715]: E0707 06:12:54.104937 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c95911a2815a4bb011dd30abaf073cab737d92afd02c94799c63c652d335bedf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-544ddc8dd6-m8cjm" Jul 7 06:12:54.105536 kubelet[2715]: E0707 06:12:54.105004 2715 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c95911a2815a4bb011dd30abaf073cab737d92afd02c94799c63c652d335bedf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-544ddc8dd6-m8cjm" Jul 7 06:12:54.105536 kubelet[2715]: E0707 06:12:54.105080 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-544ddc8dd6-m8cjm_calico-apiserver(43ff4723-2d17-47d6-a685-c6e35a5c21ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-544ddc8dd6-m8cjm_calico-apiserver(43ff4723-2d17-47d6-a685-c6e35a5c21ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c95911a2815a4bb011dd30abaf073cab737d92afd02c94799c63c652d335bedf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-544ddc8dd6-m8cjm" podUID="43ff4723-2d17-47d6-a685-c6e35a5c21ea" Jul 7 06:12:54.108510 containerd[1540]: 
time="2025-07-07T06:12:54.107515882Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-glzkq,Uid:e24f8ad2-2eec-404f-9326-0a1a6630a383,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e78a9cd4ab137a58363a0c069c8a0949c4fa321fb7c5d382064160d27326b55f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:54.108629 kubelet[2715]: E0707 06:12:54.107760 2715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e78a9cd4ab137a58363a0c069c8a0949c4fa321fb7c5d382064160d27326b55f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:54.108629 kubelet[2715]: E0707 06:12:54.108328 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e78a9cd4ab137a58363a0c069c8a0949c4fa321fb7c5d382064160d27326b55f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-glzkq" Jul 7 06:12:54.108629 kubelet[2715]: E0707 06:12:54.108346 2715 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e78a9cd4ab137a58363a0c069c8a0949c4fa321fb7c5d382064160d27326b55f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-glzkq" Jul 7 06:12:54.108730 
kubelet[2715]: E0707 06:12:54.108441 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-glzkq_calico-system(e24f8ad2-2eec-404f-9326-0a1a6630a383)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-glzkq_calico-system(e24f8ad2-2eec-404f-9326-0a1a6630a383)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e78a9cd4ab137a58363a0c069c8a0949c4fa321fb7c5d382064160d27326b55f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-glzkq" podUID="e24f8ad2-2eec-404f-9326-0a1a6630a383" Jul 7 06:12:54.115034 containerd[1540]: time="2025-07-07T06:12:54.114905418Z" level=error msg="Failed to destroy network for sandbox \"8988e4ed70ea3da0cc6bdc9c567758c13fbe3a9d1b32ecef276fb6764a906ddd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:54.116113 containerd[1540]: time="2025-07-07T06:12:54.116071313Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75c946bd69-g9fcp,Uid:394866ed-bdb7-4703-9a29-b955df5f7d92,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8988e4ed70ea3da0cc6bdc9c567758c13fbe3a9d1b32ecef276fb6764a906ddd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:54.116554 kubelet[2715]: E0707 06:12:54.116243 2715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8988e4ed70ea3da0cc6bdc9c567758c13fbe3a9d1b32ecef276fb6764a906ddd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:54.116591 kubelet[2715]: E0707 06:12:54.116558 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8988e4ed70ea3da0cc6bdc9c567758c13fbe3a9d1b32ecef276fb6764a906ddd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75c946bd69-g9fcp" Jul 7 06:12:54.116591 kubelet[2715]: E0707 06:12:54.116577 2715 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8988e4ed70ea3da0cc6bdc9c567758c13fbe3a9d1b32ecef276fb6764a906ddd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75c946bd69-g9fcp" Jul 7 06:12:54.116655 kubelet[2715]: E0707 06:12:54.116634 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-75c946bd69-g9fcp_calico-system(394866ed-bdb7-4703-9a29-b955df5f7d92)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-75c946bd69-g9fcp_calico-system(394866ed-bdb7-4703-9a29-b955df5f7d92)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8988e4ed70ea3da0cc6bdc9c567758c13fbe3a9d1b32ecef276fb6764a906ddd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-75c946bd69-g9fcp" 
podUID="394866ed-bdb7-4703-9a29-b955df5f7d92" Jul 7 06:12:54.122531 containerd[1540]: time="2025-07-07T06:12:54.122498880Z" level=error msg="Failed to destroy network for sandbox \"4b34ceeba4896b51c46f134935ab4842025ecf3dac93ac4033ee494ff963b62c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:54.122702 containerd[1540]: time="2025-07-07T06:12:54.122671168Z" level=error msg="Failed to destroy network for sandbox \"d3c36a0497925b64c864da02f675bf55457dc7ae6fb277e8135ebdf915501dcc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:54.123344 containerd[1540]: time="2025-07-07T06:12:54.123293630Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d96b9cf79-lxh57,Uid:5387ebc0-0c6e-40e8-b8c6-58824e246c67,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b34ceeba4896b51c46f134935ab4842025ecf3dac93ac4033ee494ff963b62c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:54.123617 kubelet[2715]: E0707 06:12:54.123576 2715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b34ceeba4896b51c46f134935ab4842025ecf3dac93ac4033ee494ff963b62c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:54.123678 kubelet[2715]: E0707 06:12:54.123617 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"4b34ceeba4896b51c46f134935ab4842025ecf3dac93ac4033ee494ff963b62c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d96b9cf79-lxh57" Jul 7 06:12:54.123678 kubelet[2715]: E0707 06:12:54.123634 2715 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b34ceeba4896b51c46f134935ab4842025ecf3dac93ac4033ee494ff963b62c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d96b9cf79-lxh57" Jul 7 06:12:54.123678 kubelet[2715]: E0707 06:12:54.123660 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d96b9cf79-lxh57_calico-system(5387ebc0-0c6e-40e8-b8c6-58824e246c67)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d96b9cf79-lxh57_calico-system(5387ebc0-0c6e-40e8-b8c6-58824e246c67)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b34ceeba4896b51c46f134935ab4842025ecf3dac93ac4033ee494ff963b62c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d96b9cf79-lxh57" podUID="5387ebc0-0c6e-40e8-b8c6-58824e246c67" Jul 7 06:12:54.125184 containerd[1540]: time="2025-07-07T06:12:54.125128237Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8djv4,Uid:20b9fb2d-4d88-4e4f-9147-8c3eb5c02c41,Namespace:kube-system,Attempt:0,} failed, error" error="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"d3c36a0497925b64c864da02f675bf55457dc7ae6fb277e8135ebdf915501dcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:54.125705 kubelet[2715]: E0707 06:12:54.125325 2715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3c36a0497925b64c864da02f675bf55457dc7ae6fb277e8135ebdf915501dcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:54.125705 kubelet[2715]: E0707 06:12:54.125373 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3c36a0497925b64c864da02f675bf55457dc7ae6fb277e8135ebdf915501dcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-8djv4" Jul 7 06:12:54.125705 kubelet[2715]: E0707 06:12:54.125390 2715 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3c36a0497925b64c864da02f675bf55457dc7ae6fb277e8135ebdf915501dcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-8djv4" Jul 7 06:12:54.125948 kubelet[2715]: E0707 06:12:54.125738 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-8djv4_kube-system(20b9fb2d-4d88-4e4f-9147-8c3eb5c02c41)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-8djv4_kube-system(20b9fb2d-4d88-4e4f-9147-8c3eb5c02c41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d3c36a0497925b64c864da02f675bf55457dc7ae6fb277e8135ebdf915501dcc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-8djv4" podUID="20b9fb2d-4d88-4e4f-9147-8c3eb5c02c41" Jul 7 06:12:54.150936 containerd[1540]: time="2025-07-07T06:12:54.150890587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f64c59f69-rg5ps,Uid:94fe1b3f-505a-4c7f-bccc-eff5407fbbb4,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:12:54.210898 containerd[1540]: time="2025-07-07T06:12:54.210772099Z" level=error msg="Failed to destroy network for sandbox \"73edec9fe934fde51a0f6f461d22ad2ef2f0ccedd60d546dfb141d373fdbd5a0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:54.213183 containerd[1540]: time="2025-07-07T06:12:54.213096190Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f64c59f69-rg5ps,Uid:94fe1b3f-505a-4c7f-bccc-eff5407fbbb4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"73edec9fe934fde51a0f6f461d22ad2ef2f0ccedd60d546dfb141d373fdbd5a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:54.213566 kubelet[2715]: E0707 06:12:54.213487 2715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"73edec9fe934fde51a0f6f461d22ad2ef2f0ccedd60d546dfb141d373fdbd5a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:54.213736 kubelet[2715]: E0707 06:12:54.213644 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73edec9fe934fde51a0f6f461d22ad2ef2f0ccedd60d546dfb141d373fdbd5a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f64c59f69-rg5ps" Jul 7 06:12:54.213736 kubelet[2715]: E0707 06:12:54.213664 2715 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73edec9fe934fde51a0f6f461d22ad2ef2f0ccedd60d546dfb141d373fdbd5a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f64c59f69-rg5ps" Jul 7 06:12:54.213989 kubelet[2715]: E0707 06:12:54.213920 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f64c59f69-rg5ps_calico-apiserver(94fe1b3f-505a-4c7f-bccc-eff5407fbbb4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f64c59f69-rg5ps_calico-apiserver(94fe1b3f-505a-4c7f-bccc-eff5407fbbb4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73edec9fe934fde51a0f6f461d22ad2ef2f0ccedd60d546dfb141d373fdbd5a0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-f64c59f69-rg5ps" podUID="94fe1b3f-505a-4c7f-bccc-eff5407fbbb4" Jul 7 06:12:54.789632 systemd[1]: run-netns-cni\x2d70bab44e\x2dc62d\x2d27d2\x2dfe1c\x2df56b08978f30.mount: Deactivated successfully. Jul 7 06:12:54.790953 systemd[1]: run-netns-cni\x2d28a11bf2\x2df9a8\x2d8a25\x2d9943\x2d89a12c7f0750.mount: Deactivated successfully. Jul 7 06:12:54.791171 systemd[1]: run-netns-cni\x2d33536278\x2d317b\x2d6601\x2d94c1\x2d21f4988820b5.mount: Deactivated successfully. Jul 7 06:12:57.441758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2594611039.mount: Deactivated successfully. Jul 7 06:12:57.473220 containerd[1540]: time="2025-07-07T06:12:57.473182147Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:57.474116 containerd[1540]: time="2025-07-07T06:12:57.474083417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 7 06:12:57.474881 containerd[1540]: time="2025-07-07T06:12:57.474797780Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:57.477315 containerd[1540]: time="2025-07-07T06:12:57.477258844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:57.477782 containerd[1540]: time="2025-07-07T06:12:57.477759328Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 
3.583412344s" Jul 7 06:12:57.477879 containerd[1540]: time="2025-07-07T06:12:57.477863717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 7 06:12:57.495852 containerd[1540]: time="2025-07-07T06:12:57.495131085Z" level=info msg="CreateContainer within sandbox \"a2d7fa44c51b609b1eb9be7c32d8d1c2fcdcaa43552139d9fce0023ad369c78e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 06:12:57.504883 containerd[1540]: time="2025-07-07T06:12:57.502753015Z" level=info msg="Container a517e62b763c3475832064ac7c7ae4b8fbcf7d330acf96db32514f6525828d44: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:12:57.518576 containerd[1540]: time="2025-07-07T06:12:57.518531808Z" level=info msg="CreateContainer within sandbox \"a2d7fa44c51b609b1eb9be7c32d8d1c2fcdcaa43552139d9fce0023ad369c78e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a517e62b763c3475832064ac7c7ae4b8fbcf7d330acf96db32514f6525828d44\"" Jul 7 06:12:57.520566 containerd[1540]: time="2025-07-07T06:12:57.519730675Z" level=info msg="StartContainer for \"a517e62b763c3475832064ac7c7ae4b8fbcf7d330acf96db32514f6525828d44\"" Jul 7 06:12:57.521454 containerd[1540]: time="2025-07-07T06:12:57.521350428Z" level=info msg="connecting to shim a517e62b763c3475832064ac7c7ae4b8fbcf7d330acf96db32514f6525828d44" address="unix:///run/containerd/s/6f940f23a57b3c0703bda546f27e6a0d61bbc24ca2790152c2574777c93bfb8e" protocol=ttrpc version=3 Jul 7 06:12:57.574958 systemd[1]: Started cri-containerd-a517e62b763c3475832064ac7c7ae4b8fbcf7d330acf96db32514f6525828d44.scope - libcontainer container a517e62b763c3475832064ac7c7ae4b8fbcf7d330acf96db32514f6525828d44. 
Jul 7 06:12:57.637814 containerd[1540]: time="2025-07-07T06:12:57.637772909Z" level=info msg="StartContainer for \"a517e62b763c3475832064ac7c7ae4b8fbcf7d330acf96db32514f6525828d44\" returns successfully" Jul 7 06:12:57.728937 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 06:12:57.729032 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 7 06:12:57.919045 kubelet[2715]: I0707 06:12:57.918941 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/394866ed-bdb7-4703-9a29-b955df5f7d92-whisker-ca-bundle\") pod \"394866ed-bdb7-4703-9a29-b955df5f7d92\" (UID: \"394866ed-bdb7-4703-9a29-b955df5f7d92\") " Jul 7 06:12:57.919045 kubelet[2715]: I0707 06:12:57.918984 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/394866ed-bdb7-4703-9a29-b955df5f7d92-whisker-backend-key-pair\") pod \"394866ed-bdb7-4703-9a29-b955df5f7d92\" (UID: \"394866ed-bdb7-4703-9a29-b955df5f7d92\") " Jul 7 06:12:57.919045 kubelet[2715]: I0707 06:12:57.919013 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k4hb\" (UniqueName: \"kubernetes.io/projected/394866ed-bdb7-4703-9a29-b955df5f7d92-kube-api-access-5k4hb\") pod \"394866ed-bdb7-4703-9a29-b955df5f7d92\" (UID: \"394866ed-bdb7-4703-9a29-b955df5f7d92\") " Jul 7 06:12:57.922070 kubelet[2715]: I0707 06:12:57.922027 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5z5ft" podStartSLOduration=1.823536775 podStartE2EDuration="10.9219138s" podCreationTimestamp="2025-07-07 06:12:47 +0000 UTC" firstStartedPulling="2025-07-07 06:12:48.380302624 +0000 UTC m=+17.768185624" lastFinishedPulling="2025-07-07 06:12:57.478679649 +0000 UTC m=+26.866562649" observedRunningTime="2025-07-07 06:12:57.918621354 
+0000 UTC m=+27.306504354" watchObservedRunningTime="2025-07-07 06:12:57.9219138 +0000 UTC m=+27.309796810" Jul 7 06:12:57.923465 kubelet[2715]: I0707 06:12:57.923412 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/394866ed-bdb7-4703-9a29-b955df5f7d92-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "394866ed-bdb7-4703-9a29-b955df5f7d92" (UID: "394866ed-bdb7-4703-9a29-b955df5f7d92"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 7 06:12:57.927555 kubelet[2715]: I0707 06:12:57.927503 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/394866ed-bdb7-4703-9a29-b955df5f7d92-kube-api-access-5k4hb" (OuterVolumeSpecName: "kube-api-access-5k4hb") pod "394866ed-bdb7-4703-9a29-b955df5f7d92" (UID: "394866ed-bdb7-4703-9a29-b955df5f7d92"). InnerVolumeSpecName "kube-api-access-5k4hb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 06:12:57.934112 kubelet[2715]: I0707 06:12:57.934039 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/394866ed-bdb7-4703-9a29-b955df5f7d92-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "394866ed-bdb7-4703-9a29-b955df5f7d92" (UID: "394866ed-bdb7-4703-9a29-b955df5f7d92"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 7 06:12:58.019857 kubelet[2715]: I0707 06:12:58.019530 2715 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/394866ed-bdb7-4703-9a29-b955df5f7d92-whisker-ca-bundle\") on node \"172-234-200-33\" DevicePath \"\"" Jul 7 06:12:58.019857 kubelet[2715]: I0707 06:12:58.019562 2715 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/394866ed-bdb7-4703-9a29-b955df5f7d92-whisker-backend-key-pair\") on node \"172-234-200-33\" DevicePath \"\"" Jul 7 06:12:58.019857 kubelet[2715]: I0707 06:12:58.019572 2715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5k4hb\" (UniqueName: \"kubernetes.io/projected/394866ed-bdb7-4703-9a29-b955df5f7d92-kube-api-access-5k4hb\") on node \"172-234-200-33\" DevicePath \"\"" Jul 7 06:12:58.072011 containerd[1540]: time="2025-07-07T06:12:58.071954682Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a517e62b763c3475832064ac7c7ae4b8fbcf7d330acf96db32514f6525828d44\" id:\"bbe45fef21f3d80d3cb824224adc9a53c884a22ca614c532d8283456f1c58454\" pid:3843 exit_status:1 exited_at:{seconds:1751868778 nanos:71638365}" Jul 7 06:12:58.209439 systemd[1]: Removed slice kubepods-besteffort-pod394866ed_bdb7_4703_9a29_b955df5f7d92.slice - libcontainer container kubepods-besteffort-pod394866ed_bdb7_4703_9a29_b955df5f7d92.slice. Jul 7 06:12:58.281978 systemd[1]: Created slice kubepods-besteffort-pod384d71dd_71d5_41f3_b5a0_d4b04d5815cd.slice - libcontainer container kubepods-besteffort-pod384d71dd_71d5_41f3_b5a0_d4b04d5815cd.slice. 
Jul 7 06:12:58.423081 kubelet[2715]: I0707 06:12:58.422973 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/384d71dd-71d5-41f3-b5a0-d4b04d5815cd-whisker-ca-bundle\") pod \"whisker-68dc575b59-f8c7j\" (UID: \"384d71dd-71d5-41f3-b5a0-d4b04d5815cd\") " pod="calico-system/whisker-68dc575b59-f8c7j" Jul 7 06:12:58.423081 kubelet[2715]: I0707 06:12:58.423020 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thl8v\" (UniqueName: \"kubernetes.io/projected/384d71dd-71d5-41f3-b5a0-d4b04d5815cd-kube-api-access-thl8v\") pod \"whisker-68dc575b59-f8c7j\" (UID: \"384d71dd-71d5-41f3-b5a0-d4b04d5815cd\") " pod="calico-system/whisker-68dc575b59-f8c7j" Jul 7 06:12:58.423081 kubelet[2715]: I0707 06:12:58.423041 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/384d71dd-71d5-41f3-b5a0-d4b04d5815cd-whisker-backend-key-pair\") pod \"whisker-68dc575b59-f8c7j\" (UID: \"384d71dd-71d5-41f3-b5a0-d4b04d5815cd\") " pod="calico-system/whisker-68dc575b59-f8c7j" Jul 7 06:12:58.441681 systemd[1]: var-lib-kubelet-pods-394866ed\x2dbdb7\x2d4703\x2d9a29\x2db955df5f7d92-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5k4hb.mount: Deactivated successfully. Jul 7 06:12:58.441789 systemd[1]: var-lib-kubelet-pods-394866ed\x2dbdb7\x2d4703\x2d9a29\x2db955df5f7d92-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 7 06:12:58.588909 containerd[1540]: time="2025-07-07T06:12:58.588797546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68dc575b59-f8c7j,Uid:384d71dd-71d5-41f3-b5a0-d4b04d5815cd,Namespace:calico-system,Attempt:0,}" Jul 7 06:12:58.717703 systemd-networkd[1466]: cali019cb1509a4: Link UP Jul 7 06:12:58.718372 systemd-networkd[1466]: cali019cb1509a4: Gained carrier Jul 7 06:12:58.733313 containerd[1540]: 2025-07-07 06:12:58.614 [INFO][3870] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:12:58.733313 containerd[1540]: 2025-07-07 06:12:58.646 [INFO][3870] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--200--33-k8s-whisker--68dc575b59--f8c7j-eth0 whisker-68dc575b59- calico-system 384d71dd-71d5-41f3-b5a0-d4b04d5815cd 935 0 2025-07-07 06:12:58 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:68dc575b59 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-234-200-33 whisker-68dc575b59-f8c7j eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali019cb1509a4 [] [] }} ContainerID="ffc466823f2b081a150bc1dc7ddf148ff94fb1b9f0e30ecd3039abe025a7d454" Namespace="calico-system" Pod="whisker-68dc575b59-f8c7j" WorkloadEndpoint="172--234--200--33-k8s-whisker--68dc575b59--f8c7j-" Jul 7 06:12:58.733313 containerd[1540]: 2025-07-07 06:12:58.647 [INFO][3870] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ffc466823f2b081a150bc1dc7ddf148ff94fb1b9f0e30ecd3039abe025a7d454" Namespace="calico-system" Pod="whisker-68dc575b59-f8c7j" WorkloadEndpoint="172--234--200--33-k8s-whisker--68dc575b59--f8c7j-eth0" Jul 7 06:12:58.733313 containerd[1540]: 2025-07-07 06:12:58.673 [INFO][3882] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ffc466823f2b081a150bc1dc7ddf148ff94fb1b9f0e30ecd3039abe025a7d454" 
HandleID="k8s-pod-network.ffc466823f2b081a150bc1dc7ddf148ff94fb1b9f0e30ecd3039abe025a7d454" Workload="172--234--200--33-k8s-whisker--68dc575b59--f8c7j-eth0" Jul 7 06:12:58.733498 containerd[1540]: 2025-07-07 06:12:58.673 [INFO][3882] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ffc466823f2b081a150bc1dc7ddf148ff94fb1b9f0e30ecd3039abe025a7d454" HandleID="k8s-pod-network.ffc466823f2b081a150bc1dc7ddf148ff94fb1b9f0e30ecd3039abe025a7d454" Workload="172--234--200--33-k8s-whisker--68dc575b59--f8c7j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5130), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-200-33", "pod":"whisker-68dc575b59-f8c7j", "timestamp":"2025-07-07 06:12:58.673161421 +0000 UTC"}, Hostname:"172-234-200-33", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:12:58.733498 containerd[1540]: 2025-07-07 06:12:58.673 [INFO][3882] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:12:58.733498 containerd[1540]: 2025-07-07 06:12:58.673 [INFO][3882] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:12:58.733498 containerd[1540]: 2025-07-07 06:12:58.673 [INFO][3882] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-200-33' Jul 7 06:12:58.733498 containerd[1540]: 2025-07-07 06:12:58.680 [INFO][3882] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ffc466823f2b081a150bc1dc7ddf148ff94fb1b9f0e30ecd3039abe025a7d454" host="172-234-200-33" Jul 7 06:12:58.733498 containerd[1540]: 2025-07-07 06:12:58.684 [INFO][3882] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-200-33" Jul 7 06:12:58.733498 containerd[1540]: 2025-07-07 06:12:58.689 [INFO][3882] ipam/ipam.go 511: Trying affinity for 192.168.82.64/26 host="172-234-200-33" Jul 7 06:12:58.733498 containerd[1540]: 2025-07-07 06:12:58.691 [INFO][3882] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.64/26 host="172-234-200-33" Jul 7 06:12:58.733498 containerd[1540]: 2025-07-07 06:12:58.692 [INFO][3882] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="172-234-200-33" Jul 7 06:12:58.733498 containerd[1540]: 2025-07-07 06:12:58.692 [INFO][3882] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.ffc466823f2b081a150bc1dc7ddf148ff94fb1b9f0e30ecd3039abe025a7d454" host="172-234-200-33" Jul 7 06:12:58.733707 containerd[1540]: 2025-07-07 06:12:58.694 [INFO][3882] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ffc466823f2b081a150bc1dc7ddf148ff94fb1b9f0e30ecd3039abe025a7d454 Jul 7 06:12:58.733707 containerd[1540]: 2025-07-07 06:12:58.700 [INFO][3882] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.ffc466823f2b081a150bc1dc7ddf148ff94fb1b9f0e30ecd3039abe025a7d454" host="172-234-200-33" Jul 7 06:12:58.733707 containerd[1540]: 2025-07-07 06:12:58.705 [INFO][3882] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.65/26] block=192.168.82.64/26 
handle="k8s-pod-network.ffc466823f2b081a150bc1dc7ddf148ff94fb1b9f0e30ecd3039abe025a7d454" host="172-234-200-33" Jul 7 06:12:58.733707 containerd[1540]: 2025-07-07 06:12:58.705 [INFO][3882] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.65/26] handle="k8s-pod-network.ffc466823f2b081a150bc1dc7ddf148ff94fb1b9f0e30ecd3039abe025a7d454" host="172-234-200-33" Jul 7 06:12:58.733707 containerd[1540]: 2025-07-07 06:12:58.705 [INFO][3882] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:12:58.733707 containerd[1540]: 2025-07-07 06:12:58.705 [INFO][3882] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.65/26] IPv6=[] ContainerID="ffc466823f2b081a150bc1dc7ddf148ff94fb1b9f0e30ecd3039abe025a7d454" HandleID="k8s-pod-network.ffc466823f2b081a150bc1dc7ddf148ff94fb1b9f0e30ecd3039abe025a7d454" Workload="172--234--200--33-k8s-whisker--68dc575b59--f8c7j-eth0" Jul 7 06:12:58.733940 containerd[1540]: 2025-07-07 06:12:58.708 [INFO][3870] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ffc466823f2b081a150bc1dc7ddf148ff94fb1b9f0e30ecd3039abe025a7d454" Namespace="calico-system" Pod="whisker-68dc575b59-f8c7j" WorkloadEndpoint="172--234--200--33-k8s-whisker--68dc575b59--f8c7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--33-k8s-whisker--68dc575b59--f8c7j-eth0", GenerateName:"whisker-68dc575b59-", Namespace:"calico-system", SelfLink:"", UID:"384d71dd-71d5-41f3-b5a0-d4b04d5815cd", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 12, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"68dc575b59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-33", ContainerID:"", Pod:"whisker-68dc575b59-f8c7j", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.82.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali019cb1509a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:12:58.733940 containerd[1540]: 2025-07-07 06:12:58.708 [INFO][3870] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.65/32] ContainerID="ffc466823f2b081a150bc1dc7ddf148ff94fb1b9f0e30ecd3039abe025a7d454" Namespace="calico-system" Pod="whisker-68dc575b59-f8c7j" WorkloadEndpoint="172--234--200--33-k8s-whisker--68dc575b59--f8c7j-eth0" Jul 7 06:12:58.734073 containerd[1540]: 2025-07-07 06:12:58.708 [INFO][3870] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali019cb1509a4 ContainerID="ffc466823f2b081a150bc1dc7ddf148ff94fb1b9f0e30ecd3039abe025a7d454" Namespace="calico-system" Pod="whisker-68dc575b59-f8c7j" WorkloadEndpoint="172--234--200--33-k8s-whisker--68dc575b59--f8c7j-eth0" Jul 7 06:12:58.734073 containerd[1540]: 2025-07-07 06:12:58.719 [INFO][3870] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ffc466823f2b081a150bc1dc7ddf148ff94fb1b9f0e30ecd3039abe025a7d454" Namespace="calico-system" Pod="whisker-68dc575b59-f8c7j" WorkloadEndpoint="172--234--200--33-k8s-whisker--68dc575b59--f8c7j-eth0" Jul 7 06:12:58.734142 containerd[1540]: 2025-07-07 06:12:58.720 [INFO][3870] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ffc466823f2b081a150bc1dc7ddf148ff94fb1b9f0e30ecd3039abe025a7d454" Namespace="calico-system" 
Pod="whisker-68dc575b59-f8c7j" WorkloadEndpoint="172--234--200--33-k8s-whisker--68dc575b59--f8c7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--33-k8s-whisker--68dc575b59--f8c7j-eth0", GenerateName:"whisker-68dc575b59-", Namespace:"calico-system", SelfLink:"", UID:"384d71dd-71d5-41f3-b5a0-d4b04d5815cd", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 12, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"68dc575b59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-33", ContainerID:"ffc466823f2b081a150bc1dc7ddf148ff94fb1b9f0e30ecd3039abe025a7d454", Pod:"whisker-68dc575b59-f8c7j", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.82.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali019cb1509a4", MAC:"1e:a1:53:2f:bf:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:12:58.734214 containerd[1540]: 2025-07-07 06:12:58.730 [INFO][3870] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ffc466823f2b081a150bc1dc7ddf148ff94fb1b9f0e30ecd3039abe025a7d454" Namespace="calico-system" Pod="whisker-68dc575b59-f8c7j" WorkloadEndpoint="172--234--200--33-k8s-whisker--68dc575b59--f8c7j-eth0" Jul 7 06:12:58.750753 kubelet[2715]: I0707 06:12:58.750571 2715 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="394866ed-bdb7-4703-9a29-b955df5f7d92" path="/var/lib/kubelet/pods/394866ed-bdb7-4703-9a29-b955df5f7d92/volumes" Jul 7 06:12:58.764017 containerd[1540]: time="2025-07-07T06:12:58.763990723Z" level=info msg="connecting to shim ffc466823f2b081a150bc1dc7ddf148ff94fb1b9f0e30ecd3039abe025a7d454" address="unix:///run/containerd/s/c424811c3cf5686d07fb57c5c2db8a08a8757002aac8fc3211b40a249a4d3042" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:12:58.791945 systemd[1]: Started cri-containerd-ffc466823f2b081a150bc1dc7ddf148ff94fb1b9f0e30ecd3039abe025a7d454.scope - libcontainer container ffc466823f2b081a150bc1dc7ddf148ff94fb1b9f0e30ecd3039abe025a7d454. Jul 7 06:12:58.840081 containerd[1540]: time="2025-07-07T06:12:58.839819812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68dc575b59-f8c7j,Uid:384d71dd-71d5-41f3-b5a0-d4b04d5815cd,Namespace:calico-system,Attempt:0,} returns sandbox id \"ffc466823f2b081a150bc1dc7ddf148ff94fb1b9f0e30ecd3039abe025a7d454\"" Jul 7 06:12:58.842541 containerd[1540]: time="2025-07-07T06:12:58.842472606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 06:12:58.968549 containerd[1540]: time="2025-07-07T06:12:58.968510458Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a517e62b763c3475832064ac7c7ae4b8fbcf7d330acf96db32514f6525828d44\" id:\"95d548b4334edd348cd80c274c3cd94801390ac72bea2138debd9462d1033ebf\" pid:3954 exit_status:1 exited_at:{seconds:1751868778 nanos:968279741}" Jul 7 06:13:00.157958 systemd-networkd[1466]: cali019cb1509a4: Gained IPv6LL Jul 7 06:13:00.198184 containerd[1540]: time="2025-07-07T06:13:00.198135883Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:00.199083 containerd[1540]: time="2025-07-07T06:13:00.198951296Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 7 06:13:00.199546 containerd[1540]: time="2025-07-07T06:13:00.199511611Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:00.201335 containerd[1540]: time="2025-07-07T06:13:00.201295366Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:00.203218 containerd[1540]: time="2025-07-07T06:13:00.203189989Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.360695193s" Jul 7 06:13:00.203308 containerd[1540]: time="2025-07-07T06:13:00.203293118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 7 06:13:00.211860 containerd[1540]: time="2025-07-07T06:13:00.209957350Z" level=info msg="CreateContainer within sandbox \"ffc466823f2b081a150bc1dc7ddf148ff94fb1b9f0e30ecd3039abe025a7d454\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 06:13:00.218107 containerd[1540]: time="2025-07-07T06:13:00.218084740Z" level=info msg="Container 3080931fc1eb5b426c8273bed103c377e6d57edd1ccbde69b705d1fe6d96e892: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:13:00.221222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1058009521.mount: Deactivated successfully. 
Jul 7 06:13:00.232370 containerd[1540]: time="2025-07-07T06:13:00.232291566Z" level=info msg="CreateContainer within sandbox \"ffc466823f2b081a150bc1dc7ddf148ff94fb1b9f0e30ecd3039abe025a7d454\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"3080931fc1eb5b426c8273bed103c377e6d57edd1ccbde69b705d1fe6d96e892\"" Jul 7 06:13:00.232905 containerd[1540]: time="2025-07-07T06:13:00.232801532Z" level=info msg="StartContainer for \"3080931fc1eb5b426c8273bed103c377e6d57edd1ccbde69b705d1fe6d96e892\"" Jul 7 06:13:00.234354 containerd[1540]: time="2025-07-07T06:13:00.234330418Z" level=info msg="connecting to shim 3080931fc1eb5b426c8273bed103c377e6d57edd1ccbde69b705d1fe6d96e892" address="unix:///run/containerd/s/c424811c3cf5686d07fb57c5c2db8a08a8757002aac8fc3211b40a249a4d3042" protocol=ttrpc version=3 Jul 7 06:13:00.255947 systemd[1]: Started cri-containerd-3080931fc1eb5b426c8273bed103c377e6d57edd1ccbde69b705d1fe6d96e892.scope - libcontainer container 3080931fc1eb5b426c8273bed103c377e6d57edd1ccbde69b705d1fe6d96e892. Jul 7 06:13:00.304961 containerd[1540]: time="2025-07-07T06:13:00.304931324Z" level=info msg="StartContainer for \"3080931fc1eb5b426c8273bed103c377e6d57edd1ccbde69b705d1fe6d96e892\" returns successfully" Jul 7 06:13:00.307153 containerd[1540]: time="2025-07-07T06:13:00.307040466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 7 06:13:02.021120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2808757925.mount: Deactivated successfully. 
Jul 7 06:13:02.029670 containerd[1540]: time="2025-07-07T06:13:02.029636005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:02.030418 containerd[1540]: time="2025-07-07T06:13:02.030068722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 7 06:13:02.030940 containerd[1540]: time="2025-07-07T06:13:02.030919635Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:02.032364 containerd[1540]: time="2025-07-07T06:13:02.032340034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:02.032951 containerd[1540]: time="2025-07-07T06:13:02.032917640Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 1.725587947s" Jul 7 06:13:02.032997 containerd[1540]: time="2025-07-07T06:13:02.032951780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 7 06:13:02.035798 containerd[1540]: time="2025-07-07T06:13:02.035766598Z" level=info msg="CreateContainer within sandbox \"ffc466823f2b081a150bc1dc7ddf148ff94fb1b9f0e30ecd3039abe025a7d454\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 06:13:02.041452 
containerd[1540]: time="2025-07-07T06:13:02.041421405Z" level=info msg="Container 163a63bb16309d1acbcc9a531c7e8f11b237e24db3f5ebcb2b74b127c8dcb692: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:13:02.048133 containerd[1540]: time="2025-07-07T06:13:02.048090634Z" level=info msg="CreateContainer within sandbox \"ffc466823f2b081a150bc1dc7ddf148ff94fb1b9f0e30ecd3039abe025a7d454\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"163a63bb16309d1acbcc9a531c7e8f11b237e24db3f5ebcb2b74b127c8dcb692\"" Jul 7 06:13:02.048635 containerd[1540]: time="2025-07-07T06:13:02.048587500Z" level=info msg="StartContainer for \"163a63bb16309d1acbcc9a531c7e8f11b237e24db3f5ebcb2b74b127c8dcb692\"" Jul 7 06:13:02.049676 containerd[1540]: time="2025-07-07T06:13:02.049656972Z" level=info msg="connecting to shim 163a63bb16309d1acbcc9a531c7e8f11b237e24db3f5ebcb2b74b127c8dcb692" address="unix:///run/containerd/s/c424811c3cf5686d07fb57c5c2db8a08a8757002aac8fc3211b40a249a4d3042" protocol=ttrpc version=3 Jul 7 06:13:02.071973 systemd[1]: Started cri-containerd-163a63bb16309d1acbcc9a531c7e8f11b237e24db3f5ebcb2b74b127c8dcb692.scope - libcontainer container 163a63bb16309d1acbcc9a531c7e8f11b237e24db3f5ebcb2b74b127c8dcb692. 
Jul 7 06:13:02.135950 containerd[1540]: time="2025-07-07T06:13:02.135764704Z" level=info msg="StartContainer for \"163a63bb16309d1acbcc9a531c7e8f11b237e24db3f5ebcb2b74b127c8dcb692\" returns successfully" Jul 7 06:13:02.932016 kubelet[2715]: I0707 06:13:02.931896 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-68dc575b59-f8c7j" podStartSLOduration=1.740003894 podStartE2EDuration="4.931876436s" podCreationTimestamp="2025-07-07 06:12:58 +0000 UTC" firstStartedPulling="2025-07-07 06:12:58.84207998 +0000 UTC m=+28.229962980" lastFinishedPulling="2025-07-07 06:13:02.033952522 +0000 UTC m=+31.421835522" observedRunningTime="2025-07-07 06:13:02.929987401 +0000 UTC m=+32.317870401" watchObservedRunningTime="2025-07-07 06:13:02.931876436 +0000 UTC m=+32.319759466" Jul 7 06:13:04.747288 containerd[1540]: time="2025-07-07T06:13:04.747242136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d96b9cf79-lxh57,Uid:5387ebc0-0c6e-40e8-b8c6-58824e246c67,Namespace:calico-system,Attempt:0,}" Jul 7 06:13:04.852235 systemd-networkd[1466]: calif03b6c04048: Link UP Jul 7 06:13:04.853433 systemd-networkd[1466]: calif03b6c04048: Gained carrier Jul 7 06:13:04.869980 containerd[1540]: 2025-07-07 06:13:04.775 [INFO][4229] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:13:04.869980 containerd[1540]: 2025-07-07 06:13:04.784 [INFO][4229] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--200--33-k8s-calico--kube--controllers--5d96b9cf79--lxh57-eth0 calico-kube-controllers-5d96b9cf79- calico-system 5387ebc0-0c6e-40e8-b8c6-58824e246c67 856 0 2025-07-07 06:12:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5d96b9cf79 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] 
[]} {k8s 172-234-200-33 calico-kube-controllers-5d96b9cf79-lxh57 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif03b6c04048 [] [] }} ContainerID="da6a83ff0236b908f40d7667b381c31b2139b5ab98f1348888f5241d20b422ad" Namespace="calico-system" Pod="calico-kube-controllers-5d96b9cf79-lxh57" WorkloadEndpoint="172--234--200--33-k8s-calico--kube--controllers--5d96b9cf79--lxh57-" Jul 7 06:13:04.869980 containerd[1540]: 2025-07-07 06:13:04.784 [INFO][4229] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="da6a83ff0236b908f40d7667b381c31b2139b5ab98f1348888f5241d20b422ad" Namespace="calico-system" Pod="calico-kube-controllers-5d96b9cf79-lxh57" WorkloadEndpoint="172--234--200--33-k8s-calico--kube--controllers--5d96b9cf79--lxh57-eth0" Jul 7 06:13:04.869980 containerd[1540]: 2025-07-07 06:13:04.810 [INFO][4240] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da6a83ff0236b908f40d7667b381c31b2139b5ab98f1348888f5241d20b422ad" HandleID="k8s-pod-network.da6a83ff0236b908f40d7667b381c31b2139b5ab98f1348888f5241d20b422ad" Workload="172--234--200--33-k8s-calico--kube--controllers--5d96b9cf79--lxh57-eth0" Jul 7 06:13:04.870998 containerd[1540]: 2025-07-07 06:13:04.810 [INFO][4240] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="da6a83ff0236b908f40d7667b381c31b2139b5ab98f1348888f5241d20b422ad" HandleID="k8s-pod-network.da6a83ff0236b908f40d7667b381c31b2139b5ab98f1348888f5241d20b422ad" Workload="172--234--200--33-k8s-calico--kube--controllers--5d96b9cf79--lxh57-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd130), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-200-33", "pod":"calico-kube-controllers-5d96b9cf79-lxh57", "timestamp":"2025-07-07 06:13:04.810345352 +0000 UTC"}, Hostname:"172-234-200-33", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 7 06:13:04.870998 containerd[1540]: 2025-07-07 06:13:04.810 [INFO][4240] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:13:04.870998 containerd[1540]: 2025-07-07 06:13:04.810 [INFO][4240] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:13:04.870998 containerd[1540]: 2025-07-07 06:13:04.810 [INFO][4240] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-200-33'
Jul 7 06:13:04.870998 containerd[1540]: 2025-07-07 06:13:04.818 [INFO][4240] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.da6a83ff0236b908f40d7667b381c31b2139b5ab98f1348888f5241d20b422ad" host="172-234-200-33"
Jul 7 06:13:04.870998 containerd[1540]: 2025-07-07 06:13:04.822 [INFO][4240] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-200-33"
Jul 7 06:13:04.870998 containerd[1540]: 2025-07-07 06:13:04.827 [INFO][4240] ipam/ipam.go 511: Trying affinity for 192.168.82.64/26 host="172-234-200-33"
Jul 7 06:13:04.870998 containerd[1540]: 2025-07-07 06:13:04.829 [INFO][4240] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.64/26 host="172-234-200-33"
Jul 7 06:13:04.870998 containerd[1540]: 2025-07-07 06:13:04.831 [INFO][4240] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="172-234-200-33"
Jul 7 06:13:04.871175 containerd[1540]: 2025-07-07 06:13:04.831 [INFO][4240] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.da6a83ff0236b908f40d7667b381c31b2139b5ab98f1348888f5241d20b422ad" host="172-234-200-33"
Jul 7 06:13:04.871175 containerd[1540]: 2025-07-07 06:13:04.833 [INFO][4240] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.da6a83ff0236b908f40d7667b381c31b2139b5ab98f1348888f5241d20b422ad
Jul 7 06:13:04.871175 containerd[1540]: 2025-07-07 06:13:04.836 [INFO][4240] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.da6a83ff0236b908f40d7667b381c31b2139b5ab98f1348888f5241d20b422ad" host="172-234-200-33"
Jul 7 06:13:04.871175 containerd[1540]: 2025-07-07 06:13:04.841 [INFO][4240] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.66/26] block=192.168.82.64/26 handle="k8s-pod-network.da6a83ff0236b908f40d7667b381c31b2139b5ab98f1348888f5241d20b422ad" host="172-234-200-33"
Jul 7 06:13:04.871175 containerd[1540]: 2025-07-07 06:13:04.841 [INFO][4240] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.66/26] handle="k8s-pod-network.da6a83ff0236b908f40d7667b381c31b2139b5ab98f1348888f5241d20b422ad" host="172-234-200-33"
Jul 7 06:13:04.871175 containerd[1540]: 2025-07-07 06:13:04.841 [INFO][4240] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:13:04.871175 containerd[1540]: 2025-07-07 06:13:04.841 [INFO][4240] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.66/26] IPv6=[] ContainerID="da6a83ff0236b908f40d7667b381c31b2139b5ab98f1348888f5241d20b422ad" HandleID="k8s-pod-network.da6a83ff0236b908f40d7667b381c31b2139b5ab98f1348888f5241d20b422ad" Workload="172--234--200--33-k8s-calico--kube--controllers--5d96b9cf79--lxh57-eth0"
Jul 7 06:13:04.872174 containerd[1540]: 2025-07-07 06:13:04.845 [INFO][4229] cni-plugin/k8s.go 418: Populated endpoint ContainerID="da6a83ff0236b908f40d7667b381c31b2139b5ab98f1348888f5241d20b422ad" Namespace="calico-system" Pod="calico-kube-controllers-5d96b9cf79-lxh57" WorkloadEndpoint="172--234--200--33-k8s-calico--kube--controllers--5d96b9cf79--lxh57-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--33-k8s-calico--kube--controllers--5d96b9cf79--lxh57-eth0", GenerateName:"calico-kube-controllers-5d96b9cf79-", Namespace:"calico-system", SelfLink:"", UID:"5387ebc0-0c6e-40e8-b8c6-58824e246c67", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 12, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d96b9cf79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-33", ContainerID:"", Pod:"calico-kube-controllers-5d96b9cf79-lxh57", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif03b6c04048", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:13:04.872329 containerd[1540]: 2025-07-07 06:13:04.846 [INFO][4229] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.66/32] ContainerID="da6a83ff0236b908f40d7667b381c31b2139b5ab98f1348888f5241d20b422ad" Namespace="calico-system" Pod="calico-kube-controllers-5d96b9cf79-lxh57" WorkloadEndpoint="172--234--200--33-k8s-calico--kube--controllers--5d96b9cf79--lxh57-eth0"
Jul 7 06:13:04.872329 containerd[1540]: 2025-07-07 06:13:04.846 [INFO][4229] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif03b6c04048 ContainerID="da6a83ff0236b908f40d7667b381c31b2139b5ab98f1348888f5241d20b422ad" Namespace="calico-system" Pod="calico-kube-controllers-5d96b9cf79-lxh57" WorkloadEndpoint="172--234--200--33-k8s-calico--kube--controllers--5d96b9cf79--lxh57-eth0"
Jul 7 06:13:04.872329 containerd[1540]: 2025-07-07 06:13:04.854 [INFO][4229] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da6a83ff0236b908f40d7667b381c31b2139b5ab98f1348888f5241d20b422ad" Namespace="calico-system" Pod="calico-kube-controllers-5d96b9cf79-lxh57" WorkloadEndpoint="172--234--200--33-k8s-calico--kube--controllers--5d96b9cf79--lxh57-eth0"
Jul 7 06:13:04.872394 containerd[1540]: 2025-07-07 06:13:04.854 [INFO][4229] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="da6a83ff0236b908f40d7667b381c31b2139b5ab98f1348888f5241d20b422ad" Namespace="calico-system" Pod="calico-kube-controllers-5d96b9cf79-lxh57" WorkloadEndpoint="172--234--200--33-k8s-calico--kube--controllers--5d96b9cf79--lxh57-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--33-k8s-calico--kube--controllers--5d96b9cf79--lxh57-eth0", GenerateName:"calico-kube-controllers-5d96b9cf79-", Namespace:"calico-system", SelfLink:"", UID:"5387ebc0-0c6e-40e8-b8c6-58824e246c67", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 12, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d96b9cf79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-33", ContainerID:"da6a83ff0236b908f40d7667b381c31b2139b5ab98f1348888f5241d20b422ad", Pod:"calico-kube-controllers-5d96b9cf79-lxh57", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif03b6c04048", MAC:"0e:86:2a:04:6d:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:13:04.872443 containerd[1540]: 2025-07-07 06:13:04.862 [INFO][4229] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="da6a83ff0236b908f40d7667b381c31b2139b5ab98f1348888f5241d20b422ad" Namespace="calico-system" Pod="calico-kube-controllers-5d96b9cf79-lxh57" WorkloadEndpoint="172--234--200--33-k8s-calico--kube--controllers--5d96b9cf79--lxh57-eth0"
Jul 7 06:13:04.904739 containerd[1540]: time="2025-07-07T06:13:04.904670088Z" level=info msg="connecting to shim da6a83ff0236b908f40d7667b381c31b2139b5ab98f1348888f5241d20b422ad" address="unix:///run/containerd/s/5bc192ed78d195e0e64b0d29b9bdcabfa2a970d6448ade4579d224c06e7bd919" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:13:04.946035 systemd[1]: Started cri-containerd-da6a83ff0236b908f40d7667b381c31b2139b5ab98f1348888f5241d20b422ad.scope - libcontainer container da6a83ff0236b908f40d7667b381c31b2139b5ab98f1348888f5241d20b422ad.
Jul 7 06:13:05.003669 containerd[1540]: time="2025-07-07T06:13:05.003513655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d96b9cf79-lxh57,Uid:5387ebc0-0c6e-40e8-b8c6-58824e246c67,Namespace:calico-system,Attempt:0,} returns sandbox id \"da6a83ff0236b908f40d7667b381c31b2139b5ab98f1348888f5241d20b422ad\""
Jul 7 06:13:05.005853 containerd[1540]: time="2025-07-07T06:13:05.005759221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\""
Jul 7 06:13:05.919022 systemd-networkd[1466]: calif03b6c04048: Gained IPv6LL
Jul 7 06:13:06.398713 containerd[1540]: time="2025-07-07T06:13:06.398017476Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:13:06.398713 containerd[1540]: time="2025-07-07T06:13:06.398626932Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688"
Jul 7 06:13:06.399288 containerd[1540]: time="2025-07-07T06:13:06.399247078Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:13:06.400455 containerd[1540]: time="2025-07-07T06:13:06.400416931Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:13:06.401375 containerd[1540]: time="2025-07-07T06:13:06.401351316Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 1.395536806s"
Jul 7 06:13:06.401416 containerd[1540]: time="2025-07-07T06:13:06.401379506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\""
Jul 7 06:13:06.414557 containerd[1540]: time="2025-07-07T06:13:06.414104830Z" level=info msg="CreateContainer within sandbox \"da6a83ff0236b908f40d7667b381c31b2139b5ab98f1348888f5241d20b422ad\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jul 7 06:13:06.420887 containerd[1540]: time="2025-07-07T06:13:06.418646134Z" level=info msg="Container 48376a28ef59cb78e2c38bff6fed39c083881feb9c1a87f0e531728a0e01f710: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:13:06.423415 containerd[1540]: time="2025-07-07T06:13:06.423354656Z" level=info msg="CreateContainer within sandbox \"da6a83ff0236b908f40d7667b381c31b2139b5ab98f1348888f5241d20b422ad\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"48376a28ef59cb78e2c38bff6fed39c083881feb9c1a87f0e531728a0e01f710\""
Jul 7 06:13:06.424352 containerd[1540]: time="2025-07-07T06:13:06.423777413Z" level=info msg="StartContainer for \"48376a28ef59cb78e2c38bff6fed39c083881feb9c1a87f0e531728a0e01f710\""
Jul 7 06:13:06.426211 containerd[1540]: time="2025-07-07T06:13:06.426184989Z" level=info msg="connecting to shim 48376a28ef59cb78e2c38bff6fed39c083881feb9c1a87f0e531728a0e01f710" address="unix:///run/containerd/s/5bc192ed78d195e0e64b0d29b9bdcabfa2a970d6448ade4579d224c06e7bd919" protocol=ttrpc version=3
Jul 7 06:13:06.446960 systemd[1]: Started cri-containerd-48376a28ef59cb78e2c38bff6fed39c083881feb9c1a87f0e531728a0e01f710.scope - libcontainer container 48376a28ef59cb78e2c38bff6fed39c083881feb9c1a87f0e531728a0e01f710.
Jul 7 06:13:06.494665 containerd[1540]: time="2025-07-07T06:13:06.494629325Z" level=info msg="StartContainer for \"48376a28ef59cb78e2c38bff6fed39c083881feb9c1a87f0e531728a0e01f710\" returns successfully"
Jul 7 06:13:06.947704 kubelet[2715]: I0707 06:13:06.947494 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5d96b9cf79-lxh57" podStartSLOduration=17.550932961 podStartE2EDuration="18.947471531s" podCreationTimestamp="2025-07-07 06:12:48 +0000 UTC" firstStartedPulling="2025-07-07 06:13:05.005502942 +0000 UTC m=+34.393385942" lastFinishedPulling="2025-07-07 06:13:06.402041512 +0000 UTC m=+35.789924512" observedRunningTime="2025-07-07 06:13:06.947319122 +0000 UTC m=+36.335202122" watchObservedRunningTime="2025-07-07 06:13:06.947471531 +0000 UTC m=+36.335354531"
Jul 7 06:13:07.746258 containerd[1540]: time="2025-07-07T06:13:07.746203329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v2bv6,Uid:67bc0055-3690-4794-8a33-7fab9a16fcdf,Namespace:calico-system,Attempt:0,}"
Jul 7 06:13:07.746639 containerd[1540]: time="2025-07-07T06:13:07.746280989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-glzkq,Uid:e24f8ad2-2eec-404f-9326-0a1a6630a383,Namespace:calico-system,Attempt:0,}"
Jul 7 06:13:07.869772 systemd-networkd[1466]: cali8691d5fb27e: Link UP
Jul 7 06:13:07.872565 systemd-networkd[1466]: cali8691d5fb27e: Gained carrier
Jul 7 06:13:07.889031 containerd[1540]: 2025-07-07 06:13:07.782 [INFO][4409] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jul 7 06:13:07.889031 containerd[1540]: 2025-07-07 06:13:07.796 [INFO][4409] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--200--33-k8s-csi--node--driver--v2bv6-eth0 csi-node-driver- calico-system 67bc0055-3690-4794-8a33-7fab9a16fcdf 760 0 2025-07-07 06:12:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-234-200-33 csi-node-driver-v2bv6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8691d5fb27e [] [] }} ContainerID="d552c678c6d9576ab94ed4cd63418d8deb99527f7dcc81d1adb583e0bee7d9e6" Namespace="calico-system" Pod="csi-node-driver-v2bv6" WorkloadEndpoint="172--234--200--33-k8s-csi--node--driver--v2bv6-"
Jul 7 06:13:07.889031 containerd[1540]: 2025-07-07 06:13:07.796 [INFO][4409] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d552c678c6d9576ab94ed4cd63418d8deb99527f7dcc81d1adb583e0bee7d9e6" Namespace="calico-system" Pod="csi-node-driver-v2bv6" WorkloadEndpoint="172--234--200--33-k8s-csi--node--driver--v2bv6-eth0"
Jul 7 06:13:07.889031 containerd[1540]: 2025-07-07 06:13:07.830 [INFO][4431] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d552c678c6d9576ab94ed4cd63418d8deb99527f7dcc81d1adb583e0bee7d9e6" HandleID="k8s-pod-network.d552c678c6d9576ab94ed4cd63418d8deb99527f7dcc81d1adb583e0bee7d9e6" Workload="172--234--200--33-k8s-csi--node--driver--v2bv6-eth0"
Jul 7 06:13:07.889208 containerd[1540]: 2025-07-07 06:13:07.830 [INFO][4431] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d552c678c6d9576ab94ed4cd63418d8deb99527f7dcc81d1adb583e0bee7d9e6" HandleID="k8s-pod-network.d552c678c6d9576ab94ed4cd63418d8deb99527f7dcc81d1adb583e0bee7d9e6" Workload="172--234--200--33-k8s-csi--node--driver--v2bv6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-200-33", "pod":"csi-node-driver-v2bv6", "timestamp":"2025-07-07 06:13:07.830271903 +0000 UTC"}, Hostname:"172-234-200-33", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 7 06:13:07.889208 containerd[1540]: 2025-07-07 06:13:07.830 [INFO][4431] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:13:07.889208 containerd[1540]: 2025-07-07 06:13:07.830 [INFO][4431] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:13:07.889208 containerd[1540]: 2025-07-07 06:13:07.830 [INFO][4431] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-200-33'
Jul 7 06:13:07.889208 containerd[1540]: 2025-07-07 06:13:07.839 [INFO][4431] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d552c678c6d9576ab94ed4cd63418d8deb99527f7dcc81d1adb583e0bee7d9e6" host="172-234-200-33"
Jul 7 06:13:07.889208 containerd[1540]: 2025-07-07 06:13:07.843 [INFO][4431] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-200-33"
Jul 7 06:13:07.889208 containerd[1540]: 2025-07-07 06:13:07.847 [INFO][4431] ipam/ipam.go 511: Trying affinity for 192.168.82.64/26 host="172-234-200-33"
Jul 7 06:13:07.889208 containerd[1540]: 2025-07-07 06:13:07.848 [INFO][4431] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.64/26 host="172-234-200-33"
Jul 7 06:13:07.889208 containerd[1540]: 2025-07-07 06:13:07.850 [INFO][4431] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="172-234-200-33"
Jul 7 06:13:07.889208 containerd[1540]: 2025-07-07 06:13:07.850 [INFO][4431] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.d552c678c6d9576ab94ed4cd63418d8deb99527f7dcc81d1adb583e0bee7d9e6" host="172-234-200-33"
Jul 7 06:13:07.889412 containerd[1540]: 2025-07-07 06:13:07.852 [INFO][4431] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d552c678c6d9576ab94ed4cd63418d8deb99527f7dcc81d1adb583e0bee7d9e6
Jul 7 06:13:07.889412 containerd[1540]: 2025-07-07 06:13:07.856 [INFO][4431] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.d552c678c6d9576ab94ed4cd63418d8deb99527f7dcc81d1adb583e0bee7d9e6" host="172-234-200-33"
Jul 7 06:13:07.889412 containerd[1540]: 2025-07-07 06:13:07.860 [INFO][4431] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.67/26] block=192.168.82.64/26 handle="k8s-pod-network.d552c678c6d9576ab94ed4cd63418d8deb99527f7dcc81d1adb583e0bee7d9e6" host="172-234-200-33"
Jul 7 06:13:07.889412 containerd[1540]: 2025-07-07 06:13:07.860 [INFO][4431] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.67/26] handle="k8s-pod-network.d552c678c6d9576ab94ed4cd63418d8deb99527f7dcc81d1adb583e0bee7d9e6" host="172-234-200-33"
Jul 7 06:13:07.889412 containerd[1540]: 2025-07-07 06:13:07.861 [INFO][4431] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:13:07.889412 containerd[1540]: 2025-07-07 06:13:07.861 [INFO][4431] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.67/26] IPv6=[] ContainerID="d552c678c6d9576ab94ed4cd63418d8deb99527f7dcc81d1adb583e0bee7d9e6" HandleID="k8s-pod-network.d552c678c6d9576ab94ed4cd63418d8deb99527f7dcc81d1adb583e0bee7d9e6" Workload="172--234--200--33-k8s-csi--node--driver--v2bv6-eth0"
Jul 7 06:13:07.889519 containerd[1540]: 2025-07-07 06:13:07.865 [INFO][4409] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d552c678c6d9576ab94ed4cd63418d8deb99527f7dcc81d1adb583e0bee7d9e6" Namespace="calico-system" Pod="csi-node-driver-v2bv6" WorkloadEndpoint="172--234--200--33-k8s-csi--node--driver--v2bv6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--33-k8s-csi--node--driver--v2bv6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"67bc0055-3690-4794-8a33-7fab9a16fcdf", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 12, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-33", ContainerID:"", Pod:"csi-node-driver-v2bv6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8691d5fb27e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:13:07.889614 containerd[1540]: 2025-07-07 06:13:07.865 [INFO][4409] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.67/32] ContainerID="d552c678c6d9576ab94ed4cd63418d8deb99527f7dcc81d1adb583e0bee7d9e6" Namespace="calico-system" Pod="csi-node-driver-v2bv6" WorkloadEndpoint="172--234--200--33-k8s-csi--node--driver--v2bv6-eth0"
Jul 7 06:13:07.889614 containerd[1540]: 2025-07-07 06:13:07.865 [INFO][4409] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8691d5fb27e ContainerID="d552c678c6d9576ab94ed4cd63418d8deb99527f7dcc81d1adb583e0bee7d9e6" Namespace="calico-system" Pod="csi-node-driver-v2bv6" WorkloadEndpoint="172--234--200--33-k8s-csi--node--driver--v2bv6-eth0"
Jul 7 06:13:07.889614 containerd[1540]: 2025-07-07 06:13:07.873 [INFO][4409] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d552c678c6d9576ab94ed4cd63418d8deb99527f7dcc81d1adb583e0bee7d9e6" Namespace="calico-system" Pod="csi-node-driver-v2bv6" WorkloadEndpoint="172--234--200--33-k8s-csi--node--driver--v2bv6-eth0"
Jul 7 06:13:07.889673 containerd[1540]: 2025-07-07 06:13:07.873 [INFO][4409] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d552c678c6d9576ab94ed4cd63418d8deb99527f7dcc81d1adb583e0bee7d9e6" Namespace="calico-system" Pod="csi-node-driver-v2bv6" WorkloadEndpoint="172--234--200--33-k8s-csi--node--driver--v2bv6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--33-k8s-csi--node--driver--v2bv6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"67bc0055-3690-4794-8a33-7fab9a16fcdf", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 12, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-33", ContainerID:"d552c678c6d9576ab94ed4cd63418d8deb99527f7dcc81d1adb583e0bee7d9e6", Pod:"csi-node-driver-v2bv6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8691d5fb27e", MAC:"b6:6b:65:05:a6:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:13:07.889724 containerd[1540]: 2025-07-07 06:13:07.884 [INFO][4409] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d552c678c6d9576ab94ed4cd63418d8deb99527f7dcc81d1adb583e0bee7d9e6" Namespace="calico-system" Pod="csi-node-driver-v2bv6" WorkloadEndpoint="172--234--200--33-k8s-csi--node--driver--v2bv6-eth0"
Jul 7 06:13:07.911654 containerd[1540]: time="2025-07-07T06:13:07.911428314Z" level=info msg="connecting to shim d552c678c6d9576ab94ed4cd63418d8deb99527f7dcc81d1adb583e0bee7d9e6" address="unix:///run/containerd/s/7b62c5c49eff08136a30a4d9e18ea3eca11e7022de349f44a99f7c0132bdc484" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:13:07.938086 kubelet[2715]: I0707 06:13:07.938061 2715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 7 06:13:07.940510 systemd[1]: Started cri-containerd-d552c678c6d9576ab94ed4cd63418d8deb99527f7dcc81d1adb583e0bee7d9e6.scope - libcontainer container d552c678c6d9576ab94ed4cd63418d8deb99527f7dcc81d1adb583e0bee7d9e6.
Jul 7 06:13:07.984933 systemd-networkd[1466]: cali6ab0a1e99e0: Link UP
Jul 7 06:13:07.985981 systemd-networkd[1466]: cali6ab0a1e99e0: Gained carrier
Jul 7 06:13:08.013479 containerd[1540]: 2025-07-07 06:13:07.784 [INFO][4407] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jul 7 06:13:08.013479 containerd[1540]: 2025-07-07 06:13:07.795 [INFO][4407] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--200--33-k8s-goldmane--58fd7646b9--glzkq-eth0 goldmane-58fd7646b9- calico-system e24f8ad2-2eec-404f-9326-0a1a6630a383 862 0 2025-07-07 06:12:47 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-234-200-33 goldmane-58fd7646b9-glzkq eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6ab0a1e99e0 [] [] }} ContainerID="a076d0969a926765fa1918adf40e0247b9afd6db67e15bb6455b585ee2cd636d" Namespace="calico-system" Pod="goldmane-58fd7646b9-glzkq" WorkloadEndpoint="172--234--200--33-k8s-goldmane--58fd7646b9--glzkq-"
Jul 7 06:13:08.013479 containerd[1540]: 2025-07-07 06:13:07.797 [INFO][4407] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a076d0969a926765fa1918adf40e0247b9afd6db67e15bb6455b585ee2cd636d" Namespace="calico-system" Pod="goldmane-58fd7646b9-glzkq" WorkloadEndpoint="172--234--200--33-k8s-goldmane--58fd7646b9--glzkq-eth0"
Jul 7 06:13:08.013479 containerd[1540]: 2025-07-07 06:13:07.840 [INFO][4436] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a076d0969a926765fa1918adf40e0247b9afd6db67e15bb6455b585ee2cd636d" HandleID="k8s-pod-network.a076d0969a926765fa1918adf40e0247b9afd6db67e15bb6455b585ee2cd636d" Workload="172--234--200--33-k8s-goldmane--58fd7646b9--glzkq-eth0"
Jul 7 06:13:08.013651 containerd[1540]: 2025-07-07 06:13:07.841 [INFO][4436] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a076d0969a926765fa1918adf40e0247b9afd6db67e15bb6455b585ee2cd636d" HandleID="k8s-pod-network.a076d0969a926765fa1918adf40e0247b9afd6db67e15bb6455b585ee2cd636d" Workload="172--234--200--33-k8s-goldmane--58fd7646b9--glzkq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d52b0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-200-33", "pod":"goldmane-58fd7646b9-glzkq", "timestamp":"2025-07-07 06:13:07.840852465 +0000 UTC"}, Hostname:"172-234-200-33", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 7 06:13:08.013651 containerd[1540]: 2025-07-07 06:13:07.841 [INFO][4436] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:13:08.013651 containerd[1540]: 2025-07-07 06:13:07.860 [INFO][4436] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:13:08.013651 containerd[1540]: 2025-07-07 06:13:07.860 [INFO][4436] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-200-33'
Jul 7 06:13:08.013651 containerd[1540]: 2025-07-07 06:13:07.939 [INFO][4436] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a076d0969a926765fa1918adf40e0247b9afd6db67e15bb6455b585ee2cd636d" host="172-234-200-33"
Jul 7 06:13:08.013651 containerd[1540]: 2025-07-07 06:13:07.945 [INFO][4436] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-200-33"
Jul 7 06:13:08.013651 containerd[1540]: 2025-07-07 06:13:07.952 [INFO][4436] ipam/ipam.go 511: Trying affinity for 192.168.82.64/26 host="172-234-200-33"
Jul 7 06:13:08.013651 containerd[1540]: 2025-07-07 06:13:07.953 [INFO][4436] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.64/26 host="172-234-200-33"
Jul 7 06:13:08.013651 containerd[1540]: 2025-07-07 06:13:07.955 [INFO][4436] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="172-234-200-33"
Jul 7 06:13:08.013651 containerd[1540]: 2025-07-07 06:13:07.955 [INFO][4436] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.a076d0969a926765fa1918adf40e0247b9afd6db67e15bb6455b585ee2cd636d" host="172-234-200-33"
Jul 7 06:13:08.015554 containerd[1540]: 2025-07-07 06:13:07.956 [INFO][4436] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a076d0969a926765fa1918adf40e0247b9afd6db67e15bb6455b585ee2cd636d
Jul 7 06:13:08.015554 containerd[1540]: 2025-07-07 06:13:07.960 [INFO][4436] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.a076d0969a926765fa1918adf40e0247b9afd6db67e15bb6455b585ee2cd636d" host="172-234-200-33"
Jul 7 06:13:08.015554 containerd[1540]: 2025-07-07 06:13:07.967 [INFO][4436] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.68/26] block=192.168.82.64/26 handle="k8s-pod-network.a076d0969a926765fa1918adf40e0247b9afd6db67e15bb6455b585ee2cd636d" host="172-234-200-33"
Jul 7 06:13:08.015554 containerd[1540]: 2025-07-07 06:13:07.967 [INFO][4436] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.68/26] handle="k8s-pod-network.a076d0969a926765fa1918adf40e0247b9afd6db67e15bb6455b585ee2cd636d" host="172-234-200-33"
Jul 7 06:13:08.015554 containerd[1540]: 2025-07-07 06:13:07.967 [INFO][4436] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:13:08.015554 containerd[1540]: 2025-07-07 06:13:07.967 [INFO][4436] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.68/26] IPv6=[] ContainerID="a076d0969a926765fa1918adf40e0247b9afd6db67e15bb6455b585ee2cd636d" HandleID="k8s-pod-network.a076d0969a926765fa1918adf40e0247b9afd6db67e15bb6455b585ee2cd636d" Workload="172--234--200--33-k8s-goldmane--58fd7646b9--glzkq-eth0"
Jul 7 06:13:08.015681 containerd[1540]: 2025-07-07 06:13:07.971 [INFO][4407] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a076d0969a926765fa1918adf40e0247b9afd6db67e15bb6455b585ee2cd636d" Namespace="calico-system" Pod="goldmane-58fd7646b9-glzkq" WorkloadEndpoint="172--234--200--33-k8s-goldmane--58fd7646b9--glzkq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--33-k8s-goldmane--58fd7646b9--glzkq-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"e24f8ad2-2eec-404f-9326-0a1a6630a383", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 12, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-33", ContainerID:"", Pod:"goldmane-58fd7646b9-glzkq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.82.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6ab0a1e99e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:13:08.015681 containerd[1540]: 2025-07-07 06:13:07.972 [INFO][4407] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.68/32] ContainerID="a076d0969a926765fa1918adf40e0247b9afd6db67e15bb6455b585ee2cd636d" Namespace="calico-system" Pod="goldmane-58fd7646b9-glzkq" WorkloadEndpoint="172--234--200--33-k8s-goldmane--58fd7646b9--glzkq-eth0"
Jul 7 06:13:08.015768 containerd[1540]: 2025-07-07 06:13:07.972 [INFO][4407] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ab0a1e99e0 ContainerID="a076d0969a926765fa1918adf40e0247b9afd6db67e15bb6455b585ee2cd636d" Namespace="calico-system" Pod="goldmane-58fd7646b9-glzkq" WorkloadEndpoint="172--234--200--33-k8s-goldmane--58fd7646b9--glzkq-eth0"
Jul 7 06:13:08.015768 containerd[1540]: 2025-07-07 06:13:07.987 [INFO][4407] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a076d0969a926765fa1918adf40e0247b9afd6db67e15bb6455b585ee2cd636d" Namespace="calico-system" Pod="goldmane-58fd7646b9-glzkq" WorkloadEndpoint="172--234--200--33-k8s-goldmane--58fd7646b9--glzkq-eth0"
Jul 7 06:13:08.015809 containerd[1540]: 2025-07-07 06:13:07.990 [INFO][4407] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a076d0969a926765fa1918adf40e0247b9afd6db67e15bb6455b585ee2cd636d" Namespace="calico-system" Pod="goldmane-58fd7646b9-glzkq" WorkloadEndpoint="172--234--200--33-k8s-goldmane--58fd7646b9--glzkq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--33-k8s-goldmane--58fd7646b9--glzkq-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"e24f8ad2-2eec-404f-9326-0a1a6630a383", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 12, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-33", ContainerID:"a076d0969a926765fa1918adf40e0247b9afd6db67e15bb6455b585ee2cd636d", Pod:"goldmane-58fd7646b9-glzkq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.82.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6ab0a1e99e0", MAC:"72:03:db:7b:de:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:13:08.015959 containerd[1540]: 2025-07-07 06:13:08.009 [INFO][4407] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a076d0969a926765fa1918adf40e0247b9afd6db67e15bb6455b585ee2cd636d" Namespace="calico-system" Pod="goldmane-58fd7646b9-glzkq" WorkloadEndpoint="172--234--200--33-k8s-goldmane--58fd7646b9--glzkq-eth0"
Jul 7 06:13:08.019994 containerd[1540]: time="2025-07-07T06:13:08.019756051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v2bv6,Uid:67bc0055-3690-4794-8a33-7fab9a16fcdf,Namespace:calico-system,Attempt:0,} returns sandbox id \"d552c678c6d9576ab94ed4cd63418d8deb99527f7dcc81d1adb583e0bee7d9e6\""
Jul 7 06:13:08.022373 containerd[1540]: time="2025-07-07T06:13:08.022340548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\""
Jul 7 06:13:08.043979 containerd[1540]: time="2025-07-07T06:13:08.043951665Z" level=info msg="connecting to shim a076d0969a926765fa1918adf40e0247b9afd6db67e15bb6455b585ee2cd636d" address="unix:///run/containerd/s/4b948d1a57910f11e117c9423d7c9794346d801bda47f31e47e3ad8dd14b9673" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:13:08.071965 systemd[1]: Started cri-containerd-a076d0969a926765fa1918adf40e0247b9afd6db67e15bb6455b585ee2cd636d.scope - libcontainer container a076d0969a926765fa1918adf40e0247b9afd6db67e15bb6455b585ee2cd636d.
Jul 7 06:13:08.128703 containerd[1540]: time="2025-07-07T06:13:08.128664106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-glzkq,Uid:e24f8ad2-2eec-404f-9326-0a1a6630a383,Namespace:calico-system,Attempt:0,} returns sandbox id \"a076d0969a926765fa1918adf40e0247b9afd6db67e15bb6455b585ee2cd636d\""
Jul 7 06:13:08.746845 kubelet[2715]: E0707 06:13:08.746473 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Jul 7 06:13:08.747596 containerd[1540]: time="2025-07-07T06:13:08.747501504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8djv4,Uid:20b9fb2d-4d88-4e4f-9147-8c3eb5c02c41,Namespace:kube-system,Attempt:0,}"
Jul 7 06:13:08.748061 containerd[1540]: time="2025-07-07T06:13:08.747987211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f64c59f69-rg5ps,Uid:94fe1b3f-505a-4c7f-bccc-eff5407fbbb4,Namespace:calico-apiserver,Attempt:0,}"
Jul 7 06:13:08.876645 systemd-networkd[1466]: cali58f4694e563: Link UP
Jul 7 06:13:08.878690 systemd-networkd[1466]: cali58f4694e563: Gained carrier
Jul 7 06:13:08.893655 containerd[1540]: 2025-07-07 06:13:08.789 [INFO][4578] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jul 7 06:13:08.893655 containerd[1540]: 2025-07-07 06:13:08.799 [INFO][4578] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--200--33-k8s-calico--apiserver--f64c59f69--rg5ps-eth0 calico-apiserver-f64c59f69- calico-apiserver 94fe1b3f-505a-4c7f-bccc-eff5407fbbb4 864 0 2025-07-07 06:12:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f64c59f69 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-234-200-33 calico-apiserver-f64c59f69-rg5ps eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali58f4694e563 [] [] }} ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" Namespace="calico-apiserver" Pod="calico-apiserver-f64c59f69-rg5ps" WorkloadEndpoint="172--234--200--33-k8s-calico--apiserver--f64c59f69--rg5ps-"
Jul 7 06:13:08.893655 containerd[1540]: 2025-07-07 06:13:08.799 [INFO][4578] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" Namespace="calico-apiserver" Pod="calico-apiserver-f64c59f69-rg5ps" WorkloadEndpoint="172--234--200--33-k8s-calico--apiserver--f64c59f69--rg5ps-eth0"
Jul 7 06:13:08.893655 containerd[1540]: 2025-07-07 06:13:08.832 [INFO][4595] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0
ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" HandleID="k8s-pod-network.8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" Workload="172--234--200--33-k8s-calico--apiserver--f64c59f69--rg5ps-eth0" Jul 7 06:13:08.893916 containerd[1540]: 2025-07-07 06:13:08.832 [INFO][4595] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" HandleID="k8s-pod-network.8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" Workload="172--234--200--33-k8s-calico--apiserver--f64c59f69--rg5ps-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5090), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-234-200-33", "pod":"calico-apiserver-f64c59f69-rg5ps", "timestamp":"2025-07-07 06:13:08.832015815 +0000 UTC"}, Hostname:"172-234-200-33", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:13:08.893916 containerd[1540]: 2025-07-07 06:13:08.832 [INFO][4595] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:13:08.893916 containerd[1540]: 2025-07-07 06:13:08.832 [INFO][4595] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:13:08.893916 containerd[1540]: 2025-07-07 06:13:08.832 [INFO][4595] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-200-33' Jul 7 06:13:08.893916 containerd[1540]: 2025-07-07 06:13:08.839 [INFO][4595] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" host="172-234-200-33" Jul 7 06:13:08.893916 containerd[1540]: 2025-07-07 06:13:08.843 [INFO][4595] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-200-33" Jul 7 06:13:08.893916 containerd[1540]: 2025-07-07 06:13:08.846 [INFO][4595] ipam/ipam.go 511: Trying affinity for 192.168.82.64/26 host="172-234-200-33" Jul 7 06:13:08.893916 containerd[1540]: 2025-07-07 06:13:08.848 [INFO][4595] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.64/26 host="172-234-200-33" Jul 7 06:13:08.893916 containerd[1540]: 2025-07-07 06:13:08.851 [INFO][4595] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="172-234-200-33" Jul 7 06:13:08.894141 containerd[1540]: 2025-07-07 06:13:08.851 [INFO][4595] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" host="172-234-200-33" Jul 7 06:13:08.894141 containerd[1540]: 2025-07-07 06:13:08.852 [INFO][4595] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d Jul 7 06:13:08.894141 containerd[1540]: 2025-07-07 06:13:08.855 [INFO][4595] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" host="172-234-200-33" Jul 7 06:13:08.894141 containerd[1540]: 2025-07-07 06:13:08.862 [INFO][4595] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.69/26] block=192.168.82.64/26 
handle="k8s-pod-network.8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" host="172-234-200-33" Jul 7 06:13:08.894141 containerd[1540]: 2025-07-07 06:13:08.862 [INFO][4595] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.69/26] handle="k8s-pod-network.8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" host="172-234-200-33" Jul 7 06:13:08.894141 containerd[1540]: 2025-07-07 06:13:08.862 [INFO][4595] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:13:08.894141 containerd[1540]: 2025-07-07 06:13:08.862 [INFO][4595] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.69/26] IPv6=[] ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" HandleID="k8s-pod-network.8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" Workload="172--234--200--33-k8s-calico--apiserver--f64c59f69--rg5ps-eth0" Jul 7 06:13:08.894390 containerd[1540]: 2025-07-07 06:13:08.866 [INFO][4578] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" Namespace="calico-apiserver" Pod="calico-apiserver-f64c59f69-rg5ps" WorkloadEndpoint="172--234--200--33-k8s-calico--apiserver--f64c59f69--rg5ps-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--33-k8s-calico--apiserver--f64c59f69--rg5ps-eth0", GenerateName:"calico-apiserver-f64c59f69-", Namespace:"calico-apiserver", SelfLink:"", UID:"94fe1b3f-505a-4c7f-bccc-eff5407fbbb4", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 12, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f64c59f69", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-33", ContainerID:"", Pod:"calico-apiserver-f64c59f69-rg5ps", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali58f4694e563", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:13:08.894666 containerd[1540]: 2025-07-07 06:13:08.868 [INFO][4578] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.69/32] ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" Namespace="calico-apiserver" Pod="calico-apiserver-f64c59f69-rg5ps" WorkloadEndpoint="172--234--200--33-k8s-calico--apiserver--f64c59f69--rg5ps-eth0" Jul 7 06:13:08.894666 containerd[1540]: 2025-07-07 06:13:08.868 [INFO][4578] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali58f4694e563 ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" Namespace="calico-apiserver" Pod="calico-apiserver-f64c59f69-rg5ps" WorkloadEndpoint="172--234--200--33-k8s-calico--apiserver--f64c59f69--rg5ps-eth0" Jul 7 06:13:08.894666 containerd[1540]: 2025-07-07 06:13:08.879 [INFO][4578] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" Namespace="calico-apiserver" Pod="calico-apiserver-f64c59f69-rg5ps" WorkloadEndpoint="172--234--200--33-k8s-calico--apiserver--f64c59f69--rg5ps-eth0" Jul 7 06:13:08.895202 containerd[1540]: 2025-07-07 06:13:08.880 [INFO][4578] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" Namespace="calico-apiserver" Pod="calico-apiserver-f64c59f69-rg5ps" WorkloadEndpoint="172--234--200--33-k8s-calico--apiserver--f64c59f69--rg5ps-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--33-k8s-calico--apiserver--f64c59f69--rg5ps-eth0", GenerateName:"calico-apiserver-f64c59f69-", Namespace:"calico-apiserver", SelfLink:"", UID:"94fe1b3f-505a-4c7f-bccc-eff5407fbbb4", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 12, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f64c59f69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-33", ContainerID:"8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d", Pod:"calico-apiserver-f64c59f69-rg5ps", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali58f4694e563", MAC:"42:0b:1c:26:ec:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:13:08.895257 containerd[1540]: 2025-07-07 06:13:08.890 [INFO][4578] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" Namespace="calico-apiserver" Pod="calico-apiserver-f64c59f69-rg5ps" WorkloadEndpoint="172--234--200--33-k8s-calico--apiserver--f64c59f69--rg5ps-eth0" Jul 7 06:13:08.917963 containerd[1540]: time="2025-07-07T06:13:08.917916279Z" level=info msg="connecting to shim 8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" address="unix:///run/containerd/s/0ab843e8e167d0256646d39dd0a8e78655f02dfad4c25fdd7ea10e1c1a04f58c" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:13:08.964959 systemd[1]: Started cri-containerd-8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d.scope - libcontainer container 8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d. Jul 7 06:13:08.988396 systemd-networkd[1466]: cali7767b4feada: Link UP Jul 7 06:13:08.990379 systemd-networkd[1466]: cali7767b4feada: Gained carrier Jul 7 06:13:09.020039 containerd[1540]: 2025-07-07 06:13:08.785 [INFO][4568] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:13:09.020039 containerd[1540]: 2025-07-07 06:13:08.797 [INFO][4568] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--200--33-k8s-coredns--7c65d6cfc9--8djv4-eth0 coredns-7c65d6cfc9- kube-system 20b9fb2d-4d88-4e4f-9147-8c3eb5c02c41 859 0 2025-07-07 06:12:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-234-200-33 coredns-7c65d6cfc9-8djv4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7767b4feada [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="10f899b85e75188cdb9847e597c58470295310c1de887cecfba9aecf67886dd1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8djv4" 
WorkloadEndpoint="172--234--200--33-k8s-coredns--7c65d6cfc9--8djv4-" Jul 7 06:13:09.020039 containerd[1540]: 2025-07-07 06:13:08.797 [INFO][4568] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="10f899b85e75188cdb9847e597c58470295310c1de887cecfba9aecf67886dd1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8djv4" WorkloadEndpoint="172--234--200--33-k8s-coredns--7c65d6cfc9--8djv4-eth0" Jul 7 06:13:09.020039 containerd[1540]: 2025-07-07 06:13:08.840 [INFO][4593] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10f899b85e75188cdb9847e597c58470295310c1de887cecfba9aecf67886dd1" HandleID="k8s-pod-network.10f899b85e75188cdb9847e597c58470295310c1de887cecfba9aecf67886dd1" Workload="172--234--200--33-k8s-coredns--7c65d6cfc9--8djv4-eth0" Jul 7 06:13:09.020220 containerd[1540]: 2025-07-07 06:13:08.840 [INFO][4593] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="10f899b85e75188cdb9847e597c58470295310c1de887cecfba9aecf67886dd1" HandleID="k8s-pod-network.10f899b85e75188cdb9847e597c58470295310c1de887cecfba9aecf67886dd1" Workload="172--234--200--33-k8s-coredns--7c65d6cfc9--8djv4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5710), Attrs:map[string]string{"namespace":"kube-system", "node":"172-234-200-33", "pod":"coredns-7c65d6cfc9-8djv4", "timestamp":"2025-07-07 06:13:08.840026234 +0000 UTC"}, Hostname:"172-234-200-33", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:13:09.020220 containerd[1540]: 2025-07-07 06:13:08.840 [INFO][4593] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:13:09.020220 containerd[1540]: 2025-07-07 06:13:08.862 [INFO][4593] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:13:09.020220 containerd[1540]: 2025-07-07 06:13:08.863 [INFO][4593] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-200-33' Jul 7 06:13:09.020220 containerd[1540]: 2025-07-07 06:13:08.940 [INFO][4593] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.10f899b85e75188cdb9847e597c58470295310c1de887cecfba9aecf67886dd1" host="172-234-200-33" Jul 7 06:13:09.020220 containerd[1540]: 2025-07-07 06:13:08.951 [INFO][4593] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-200-33" Jul 7 06:13:09.020220 containerd[1540]: 2025-07-07 06:13:08.959 [INFO][4593] ipam/ipam.go 511: Trying affinity for 192.168.82.64/26 host="172-234-200-33" Jul 7 06:13:09.020220 containerd[1540]: 2025-07-07 06:13:08.962 [INFO][4593] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.64/26 host="172-234-200-33" Jul 7 06:13:09.020220 containerd[1540]: 2025-07-07 06:13:08.966 [INFO][4593] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="172-234-200-33" Jul 7 06:13:09.020220 containerd[1540]: 2025-07-07 06:13:08.966 [INFO][4593] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.10f899b85e75188cdb9847e597c58470295310c1de887cecfba9aecf67886dd1" host="172-234-200-33" Jul 7 06:13:09.020410 containerd[1540]: 2025-07-07 06:13:08.968 [INFO][4593] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.10f899b85e75188cdb9847e597c58470295310c1de887cecfba9aecf67886dd1 Jul 7 06:13:09.020410 containerd[1540]: 2025-07-07 06:13:08.974 [INFO][4593] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.10f899b85e75188cdb9847e597c58470295310c1de887cecfba9aecf67886dd1" host="172-234-200-33" Jul 7 06:13:09.020410 containerd[1540]: 2025-07-07 06:13:08.981 [INFO][4593] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.70/26] block=192.168.82.64/26 
handle="k8s-pod-network.10f899b85e75188cdb9847e597c58470295310c1de887cecfba9aecf67886dd1" host="172-234-200-33" Jul 7 06:13:09.020410 containerd[1540]: 2025-07-07 06:13:08.981 [INFO][4593] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.70/26] handle="k8s-pod-network.10f899b85e75188cdb9847e597c58470295310c1de887cecfba9aecf67886dd1" host="172-234-200-33" Jul 7 06:13:09.020410 containerd[1540]: 2025-07-07 06:13:08.981 [INFO][4593] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:13:09.020410 containerd[1540]: 2025-07-07 06:13:08.981 [INFO][4593] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.70/26] IPv6=[] ContainerID="10f899b85e75188cdb9847e597c58470295310c1de887cecfba9aecf67886dd1" HandleID="k8s-pod-network.10f899b85e75188cdb9847e597c58470295310c1de887cecfba9aecf67886dd1" Workload="172--234--200--33-k8s-coredns--7c65d6cfc9--8djv4-eth0" Jul 7 06:13:09.020519 containerd[1540]: 2025-07-07 06:13:08.985 [INFO][4568] cni-plugin/k8s.go 418: Populated endpoint ContainerID="10f899b85e75188cdb9847e597c58470295310c1de887cecfba9aecf67886dd1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8djv4" WorkloadEndpoint="172--234--200--33-k8s-coredns--7c65d6cfc9--8djv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--33-k8s-coredns--7c65d6cfc9--8djv4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"20b9fb2d-4d88-4e4f-9147-8c3eb5c02c41", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 12, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-33", ContainerID:"", Pod:"coredns-7c65d6cfc9-8djv4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7767b4feada", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:13:09.020519 containerd[1540]: 2025-07-07 06:13:08.985 [INFO][4568] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.70/32] ContainerID="10f899b85e75188cdb9847e597c58470295310c1de887cecfba9aecf67886dd1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8djv4" WorkloadEndpoint="172--234--200--33-k8s-coredns--7c65d6cfc9--8djv4-eth0" Jul 7 06:13:09.020519 containerd[1540]: 2025-07-07 06:13:08.985 [INFO][4568] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7767b4feada ContainerID="10f899b85e75188cdb9847e597c58470295310c1de887cecfba9aecf67886dd1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8djv4" WorkloadEndpoint="172--234--200--33-k8s-coredns--7c65d6cfc9--8djv4-eth0" Jul 7 06:13:09.020519 containerd[1540]: 2025-07-07 06:13:08.991 [INFO][4568] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="10f899b85e75188cdb9847e597c58470295310c1de887cecfba9aecf67886dd1" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-8djv4" WorkloadEndpoint="172--234--200--33-k8s-coredns--7c65d6cfc9--8djv4-eth0" Jul 7 06:13:09.020519 containerd[1540]: 2025-07-07 06:13:08.993 [INFO][4568] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="10f899b85e75188cdb9847e597c58470295310c1de887cecfba9aecf67886dd1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8djv4" WorkloadEndpoint="172--234--200--33-k8s-coredns--7c65d6cfc9--8djv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--33-k8s-coredns--7c65d6cfc9--8djv4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"20b9fb2d-4d88-4e4f-9147-8c3eb5c02c41", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 12, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-33", ContainerID:"10f899b85e75188cdb9847e597c58470295310c1de887cecfba9aecf67886dd1", Pod:"coredns-7c65d6cfc9-8djv4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7767b4feada", MAC:"de:9d:94:ba:eb:4c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:13:09.020519 containerd[1540]: 2025-07-07 06:13:09.010 [INFO][4568] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="10f899b85e75188cdb9847e597c58470295310c1de887cecfba9aecf67886dd1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8djv4" WorkloadEndpoint="172--234--200--33-k8s-coredns--7c65d6cfc9--8djv4-eth0" Jul 7 06:13:09.049626 containerd[1540]: time="2025-07-07T06:13:09.049566902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f64c59f69-rg5ps,Uid:94fe1b3f-505a-4c7f-bccc-eff5407fbbb4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d\"" Jul 7 06:13:09.055788 containerd[1540]: time="2025-07-07T06:13:09.055544973Z" level=info msg="connecting to shim 10f899b85e75188cdb9847e597c58470295310c1de887cecfba9aecf67886dd1" address="unix:///run/containerd/s/4001c2ac4443ca3afe78cf9c6d26a0a8f3ceaebe3b512fd040fdf14da59ab365" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:13:09.098048 systemd[1]: Started cri-containerd-10f899b85e75188cdb9847e597c58470295310c1de887cecfba9aecf67886dd1.scope - libcontainer container 10f899b85e75188cdb9847e597c58470295310c1de887cecfba9aecf67886dd1. 
Jul 7 06:13:09.201048 containerd[1540]: time="2025-07-07T06:13:09.201000385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8djv4,Uid:20b9fb2d-4d88-4e4f-9147-8c3eb5c02c41,Namespace:kube-system,Attempt:0,} returns sandbox id \"10f899b85e75188cdb9847e597c58470295310c1de887cecfba9aecf67886dd1\"" Jul 7 06:13:09.202739 kubelet[2715]: E0707 06:13:09.202704 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:13:09.206028 containerd[1540]: time="2025-07-07T06:13:09.206001120Z" level=info msg="CreateContainer within sandbox \"10f899b85e75188cdb9847e597c58470295310c1de887cecfba9aecf67886dd1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:13:09.226160 containerd[1540]: time="2025-07-07T06:13:09.226042533Z" level=info msg="Container 5e5ac9ef632670873eaf78d8df3af591cd7781dc547c61c8ca41c5ead3bec643: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:13:09.236235 containerd[1540]: time="2025-07-07T06:13:09.236198593Z" level=info msg="CreateContainer within sandbox \"10f899b85e75188cdb9847e597c58470295310c1de887cecfba9aecf67886dd1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5e5ac9ef632670873eaf78d8df3af591cd7781dc547c61c8ca41c5ead3bec643\"" Jul 7 06:13:09.237503 containerd[1540]: time="2025-07-07T06:13:09.237321098Z" level=info msg="StartContainer for \"5e5ac9ef632670873eaf78d8df3af591cd7781dc547c61c8ca41c5ead3bec643\"" Jul 7 06:13:09.240345 containerd[1540]: time="2025-07-07T06:13:09.240083585Z" level=info msg="connecting to shim 5e5ac9ef632670873eaf78d8df3af591cd7781dc547c61c8ca41c5ead3bec643" address="unix:///run/containerd/s/4001c2ac4443ca3afe78cf9c6d26a0a8f3ceaebe3b512fd040fdf14da59ab365" protocol=ttrpc version=3 Jul 7 06:13:09.246008 systemd-networkd[1466]: cali8691d5fb27e: Gained IPv6LL Jul 7 06:13:09.287965 systemd[1]: Started 
cri-containerd-5e5ac9ef632670873eaf78d8df3af591cd7781dc547c61c8ca41c5ead3bec643.scope - libcontainer container 5e5ac9ef632670873eaf78d8df3af591cd7781dc547c61c8ca41c5ead3bec643. Jul 7 06:13:09.336524 containerd[1540]: time="2025-07-07T06:13:09.336455166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:09.340778 containerd[1540]: time="2025-07-07T06:13:09.340755635Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 7 06:13:09.341344 containerd[1540]: time="2025-07-07T06:13:09.341308812Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:09.347127 containerd[1540]: time="2025-07-07T06:13:09.347020744Z" level=info msg="StartContainer for \"5e5ac9ef632670873eaf78d8df3af591cd7781dc547c61c8ca41c5ead3bec643\" returns successfully" Jul 7 06:13:09.347844 containerd[1540]: time="2025-07-07T06:13:09.347762011Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:09.350759 containerd[1540]: time="2025-07-07T06:13:09.350688656Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.327744062s" Jul 7 06:13:09.350759 containerd[1540]: time="2025-07-07T06:13:09.350713826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" 
Jul 7 06:13:09.352845 containerd[1540]: time="2025-07-07T06:13:09.352425628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 06:13:09.354527 containerd[1540]: time="2025-07-07T06:13:09.354114290Z" level=info msg="CreateContainer within sandbox \"d552c678c6d9576ab94ed4cd63418d8deb99527f7dcc81d1adb583e0bee7d9e6\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 7 06:13:09.367399 containerd[1540]: time="2025-07-07T06:13:09.367380005Z" level=info msg="Container ca3ca299b50f52448cc8d99d28878314156adeac23c9cac0323c748c2b447480: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:13:09.374699 containerd[1540]: time="2025-07-07T06:13:09.374679340Z" level=info msg="CreateContainer within sandbox \"d552c678c6d9576ab94ed4cd63418d8deb99527f7dcc81d1adb583e0bee7d9e6\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ca3ca299b50f52448cc8d99d28878314156adeac23c9cac0323c748c2b447480\"" Jul 7 06:13:09.377208 containerd[1540]: time="2025-07-07T06:13:09.377114558Z" level=info msg="StartContainer for \"ca3ca299b50f52448cc8d99d28878314156adeac23c9cac0323c748c2b447480\"" Jul 7 06:13:09.381555 containerd[1540]: time="2025-07-07T06:13:09.381526926Z" level=info msg="connecting to shim ca3ca299b50f52448cc8d99d28878314156adeac23c9cac0323c748c2b447480" address="unix:///run/containerd/s/7b62c5c49eff08136a30a4d9e18ea3eca11e7022de349f44a99f7c0132bdc484" protocol=ttrpc version=3 Jul 7 06:13:09.416159 systemd[1]: Started cri-containerd-ca3ca299b50f52448cc8d99d28878314156adeac23c9cac0323c748c2b447480.scope - libcontainer container ca3ca299b50f52448cc8d99d28878314156adeac23c9cac0323c748c2b447480. 
Jul 7 06:13:09.475758 containerd[1540]: time="2025-07-07T06:13:09.475715528Z" level=info msg="StartContainer for \"ca3ca299b50f52448cc8d99d28878314156adeac23c9cac0323c748c2b447480\" returns successfully"
Jul 7 06:13:09.567998 systemd-networkd[1466]: cali6ab0a1e99e0: Gained IPv6LL
Jul 7 06:13:09.745907 kubelet[2715]: E0707 06:13:09.745868 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Jul 7 06:13:09.746631 containerd[1540]: time="2025-07-07T06:13:09.746585740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-544ddc8dd6-m8cjm,Uid:43ff4723-2d17-47d6-a685-c6e35a5c21ea,Namespace:calico-apiserver,Attempt:0,}"
Jul 7 06:13:09.747005 containerd[1540]: time="2025-07-07T06:13:09.746965958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f64c59f69-9hhmk,Uid:eea1d1cc-704d-4bb8-8684-4c28eeac74a0,Namespace:calico-apiserver,Attempt:0,}"
Jul 7 06:13:09.748332 containerd[1540]: time="2025-07-07T06:13:09.748297761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fqrrz,Uid:50095d4e-5ad9-4407-9d8b-ae5276b53aa1,Namespace:kube-system,Attempt:0,}"
Jul 7 06:13:09.908463 systemd-networkd[1466]: calicaf98e9acf6: Link UP
Jul 7 06:13:09.910002 systemd-networkd[1466]: calicaf98e9acf6: Gained carrier
Jul 7 06:13:09.921708 containerd[1540]: 2025-07-07 06:13:09.810 [INFO][4804] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jul 7 06:13:09.921708 containerd[1540]: 2025-07-07 06:13:09.819 [INFO][4804] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--200--33-k8s-calico--apiserver--544ddc8dd6--m8cjm-eth0 calico-apiserver-544ddc8dd6- calico-apiserver 43ff4723-2d17-47d6-a685-c6e35a5c21ea 861 0 2025-07-07 06:12:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:544ddc8dd6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-234-200-33 calico-apiserver-544ddc8dd6-m8cjm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicaf98e9acf6 [] [] }} ContainerID="5f164e24ecec3fc7fe3ab891f1a226005feaef2c92cbdbb2d2cd7c461b52642b" Namespace="calico-apiserver" Pod="calico-apiserver-544ddc8dd6-m8cjm" WorkloadEndpoint="172--234--200--33-k8s-calico--apiserver--544ddc8dd6--m8cjm-"
Jul 7 06:13:09.921708 containerd[1540]: 2025-07-07 06:13:09.819 [INFO][4804] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5f164e24ecec3fc7fe3ab891f1a226005feaef2c92cbdbb2d2cd7c461b52642b" Namespace="calico-apiserver" Pod="calico-apiserver-544ddc8dd6-m8cjm" WorkloadEndpoint="172--234--200--33-k8s-calico--apiserver--544ddc8dd6--m8cjm-eth0"
Jul 7 06:13:09.921708 containerd[1540]: 2025-07-07 06:13:09.854 [INFO][4836] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5f164e24ecec3fc7fe3ab891f1a226005feaef2c92cbdbb2d2cd7c461b52642b" HandleID="k8s-pod-network.5f164e24ecec3fc7fe3ab891f1a226005feaef2c92cbdbb2d2cd7c461b52642b" Workload="172--234--200--33-k8s-calico--apiserver--544ddc8dd6--m8cjm-eth0"
Jul 7 06:13:09.921708 containerd[1540]: 2025-07-07 06:13:09.854 [INFO][4836] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5f164e24ecec3fc7fe3ab891f1a226005feaef2c92cbdbb2d2cd7c461b52642b" HandleID="k8s-pod-network.5f164e24ecec3fc7fe3ab891f1a226005feaef2c92cbdbb2d2cd7c461b52642b" Workload="172--234--200--33-k8s-calico--apiserver--544ddc8dd6--m8cjm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5810), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-234-200-33", "pod":"calico-apiserver-544ddc8dd6-m8cjm", "timestamp":"2025-07-07 06:13:09.854503535 +0000 UTC"}, Hostname:"172-234-200-33", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 7 06:13:09.921708 containerd[1540]: 2025-07-07 06:13:09.854 [INFO][4836] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:13:09.921708 containerd[1540]: 2025-07-07 06:13:09.854 [INFO][4836] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:13:09.921708 containerd[1540]: 2025-07-07 06:13:09.854 [INFO][4836] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-200-33'
Jul 7 06:13:09.921708 containerd[1540]: 2025-07-07 06:13:09.862 [INFO][4836] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5f164e24ecec3fc7fe3ab891f1a226005feaef2c92cbdbb2d2cd7c461b52642b" host="172-234-200-33"
Jul 7 06:13:09.921708 containerd[1540]: 2025-07-07 06:13:09.866 [INFO][4836] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-200-33"
Jul 7 06:13:09.921708 containerd[1540]: 2025-07-07 06:13:09.872 [INFO][4836] ipam/ipam.go 511: Trying affinity for 192.168.82.64/26 host="172-234-200-33"
Jul 7 06:13:09.921708 containerd[1540]: 2025-07-07 06:13:09.876 [INFO][4836] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.64/26 host="172-234-200-33"
Jul 7 06:13:09.921708 containerd[1540]: 2025-07-07 06:13:09.879 [INFO][4836] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="172-234-200-33"
Jul 7 06:13:09.921708 containerd[1540]: 2025-07-07 06:13:09.879 [INFO][4836] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.5f164e24ecec3fc7fe3ab891f1a226005feaef2c92cbdbb2d2cd7c461b52642b" host="172-234-200-33"
Jul 7 06:13:09.921708 containerd[1540]: 2025-07-07 06:13:09.881 [INFO][4836] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5f164e24ecec3fc7fe3ab891f1a226005feaef2c92cbdbb2d2cd7c461b52642b
Jul 7 06:13:09.921708 containerd[1540]: 2025-07-07 06:13:09.889 [INFO][4836] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.5f164e24ecec3fc7fe3ab891f1a226005feaef2c92cbdbb2d2cd7c461b52642b" host="172-234-200-33"
Jul 7 06:13:09.921708 containerd[1540]: 2025-07-07 06:13:09.896 [INFO][4836] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.71/26] block=192.168.82.64/26 handle="k8s-pod-network.5f164e24ecec3fc7fe3ab891f1a226005feaef2c92cbdbb2d2cd7c461b52642b" host="172-234-200-33"
Jul 7 06:13:09.921708 containerd[1540]: 2025-07-07 06:13:09.897 [INFO][4836] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.71/26] handle="k8s-pod-network.5f164e24ecec3fc7fe3ab891f1a226005feaef2c92cbdbb2d2cd7c461b52642b" host="172-234-200-33"
Jul 7 06:13:09.921708 containerd[1540]: 2025-07-07 06:13:09.897 [INFO][4836] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:13:09.921708 containerd[1540]: 2025-07-07 06:13:09.897 [INFO][4836] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.71/26] IPv6=[] ContainerID="5f164e24ecec3fc7fe3ab891f1a226005feaef2c92cbdbb2d2cd7c461b52642b" HandleID="k8s-pod-network.5f164e24ecec3fc7fe3ab891f1a226005feaef2c92cbdbb2d2cd7c461b52642b" Workload="172--234--200--33-k8s-calico--apiserver--544ddc8dd6--m8cjm-eth0"
Jul 7 06:13:09.922228 containerd[1540]: 2025-07-07 06:13:09.903 [INFO][4804] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5f164e24ecec3fc7fe3ab891f1a226005feaef2c92cbdbb2d2cd7c461b52642b" Namespace="calico-apiserver" Pod="calico-apiserver-544ddc8dd6-m8cjm" WorkloadEndpoint="172--234--200--33-k8s-calico--apiserver--544ddc8dd6--m8cjm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--33-k8s-calico--apiserver--544ddc8dd6--m8cjm-eth0", GenerateName:"calico-apiserver-544ddc8dd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"43ff4723-2d17-47d6-a685-c6e35a5c21ea", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 12, 45, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"544ddc8dd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-33", ContainerID:"", Pod:"calico-apiserver-544ddc8dd6-m8cjm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicaf98e9acf6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:13:09.922228 containerd[1540]: 2025-07-07 06:13:09.903 [INFO][4804] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.71/32] ContainerID="5f164e24ecec3fc7fe3ab891f1a226005feaef2c92cbdbb2d2cd7c461b52642b" Namespace="calico-apiserver" Pod="calico-apiserver-544ddc8dd6-m8cjm" WorkloadEndpoint="172--234--200--33-k8s-calico--apiserver--544ddc8dd6--m8cjm-eth0"
Jul 7 06:13:09.922228 containerd[1540]: 2025-07-07 06:13:09.903 [INFO][4804] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicaf98e9acf6 ContainerID="5f164e24ecec3fc7fe3ab891f1a226005feaef2c92cbdbb2d2cd7c461b52642b" Namespace="calico-apiserver" Pod="calico-apiserver-544ddc8dd6-m8cjm" WorkloadEndpoint="172--234--200--33-k8s-calico--apiserver--544ddc8dd6--m8cjm-eth0"
Jul 7 06:13:09.922228 containerd[1540]: 2025-07-07 06:13:09.909 [INFO][4804] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5f164e24ecec3fc7fe3ab891f1a226005feaef2c92cbdbb2d2cd7c461b52642b" Namespace="calico-apiserver" Pod="calico-apiserver-544ddc8dd6-m8cjm" WorkloadEndpoint="172--234--200--33-k8s-calico--apiserver--544ddc8dd6--m8cjm-eth0"
Jul 7 06:13:09.922228 containerd[1540]: 2025-07-07 06:13:09.909 [INFO][4804] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5f164e24ecec3fc7fe3ab891f1a226005feaef2c92cbdbb2d2cd7c461b52642b" Namespace="calico-apiserver" Pod="calico-apiserver-544ddc8dd6-m8cjm" WorkloadEndpoint="172--234--200--33-k8s-calico--apiserver--544ddc8dd6--m8cjm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--33-k8s-calico--apiserver--544ddc8dd6--m8cjm-eth0", GenerateName:"calico-apiserver-544ddc8dd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"43ff4723-2d17-47d6-a685-c6e35a5c21ea", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 12, 45, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"544ddc8dd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-33", ContainerID:"5f164e24ecec3fc7fe3ab891f1a226005feaef2c92cbdbb2d2cd7c461b52642b", Pod:"calico-apiserver-544ddc8dd6-m8cjm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicaf98e9acf6", MAC:"d2:fe:86:94:cc:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:13:09.922228 containerd[1540]: 2025-07-07 06:13:09.918 [INFO][4804] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5f164e24ecec3fc7fe3ab891f1a226005feaef2c92cbdbb2d2cd7c461b52642b" Namespace="calico-apiserver" Pod="calico-apiserver-544ddc8dd6-m8cjm" WorkloadEndpoint="172--234--200--33-k8s-calico--apiserver--544ddc8dd6--m8cjm-eth0"
Jul 7 06:13:09.945583 containerd[1540]: time="2025-07-07T06:13:09.945521752Z" level=info msg="connecting to shim 5f164e24ecec3fc7fe3ab891f1a226005feaef2c92cbdbb2d2cd7c461b52642b" address="unix:///run/containerd/s/d99e261a9e32ce8b1df7b8365939f4dd2783d085e18a00fed6f4d9d7de2d9da0" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:13:09.964086 kubelet[2715]: E0707 06:13:09.963699 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Jul 7 06:13:09.977025 kubelet[2715]: I0707 06:13:09.976695 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-8djv4" podStartSLOduration=33.97668218 podStartE2EDuration="33.97668218s" podCreationTimestamp="2025-07-07 06:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:13:09.976148563 +0000 UTC m=+39.364031563" watchObservedRunningTime="2025-07-07 06:13:09.97668218 +0000 UTC m=+39.364565180"
Jul 7 06:13:09.990109 systemd[1]: Started cri-containerd-5f164e24ecec3fc7fe3ab891f1a226005feaef2c92cbdbb2d2cd7c461b52642b.scope - libcontainer container 5f164e24ecec3fc7fe3ab891f1a226005feaef2c92cbdbb2d2cd7c461b52642b.
Jul 7 06:13:10.033369 systemd-networkd[1466]: cali0e56d87daa0: Link UP
Jul 7 06:13:10.036221 systemd-networkd[1466]: cali0e56d87daa0: Gained carrier
Jul 7 06:13:10.058672 containerd[1540]: 2025-07-07 06:13:09.822 [INFO][4803] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jul 7 06:13:10.058672 containerd[1540]: 2025-07-07 06:13:09.847 [INFO][4803] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--200--33-k8s-calico--apiserver--f64c59f69--9hhmk-eth0 calico-apiserver-f64c59f69- calico-apiserver eea1d1cc-704d-4bb8-8684-4c28eeac74a0 863 0 2025-07-07 06:12:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f64c59f69 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-234-200-33 calico-apiserver-f64c59f69-9hhmk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0e56d87daa0 [] [] }} ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" Namespace="calico-apiserver" Pod="calico-apiserver-f64c59f69-9hhmk" WorkloadEndpoint="172--234--200--33-k8s-calico--apiserver--f64c59f69--9hhmk-"
Jul 7 06:13:10.058672 containerd[1540]: 2025-07-07 06:13:09.847 [INFO][4803] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" Namespace="calico-apiserver" Pod="calico-apiserver-f64c59f69-9hhmk" WorkloadEndpoint="172--234--200--33-k8s-calico--apiserver--f64c59f69--9hhmk-eth0"
Jul 7 06:13:10.058672 containerd[1540]: 2025-07-07 06:13:09.891 [INFO][4846] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" HandleID="k8s-pod-network.945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" Workload="172--234--200--33-k8s-calico--apiserver--f64c59f69--9hhmk-eth0"
Jul 7 06:13:10.058672 containerd[1540]: 2025-07-07 06:13:09.892 [INFO][4846] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" HandleID="k8s-pod-network.945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" Workload="172--234--200--33-k8s-calico--apiserver--f64c59f69--9hhmk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002acac0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-234-200-33", "pod":"calico-apiserver-f64c59f69-9hhmk", "timestamp":"2025-07-07 06:13:09.891776623 +0000 UTC"}, Hostname:"172-234-200-33", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 7 06:13:10.058672 containerd[1540]: 2025-07-07 06:13:09.892 [INFO][4846] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:13:10.058672 containerd[1540]: 2025-07-07 06:13:09.897 [INFO][4846] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:13:10.058672 containerd[1540]: 2025-07-07 06:13:09.897 [INFO][4846] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-200-33'
Jul 7 06:13:10.058672 containerd[1540]: 2025-07-07 06:13:09.963 [INFO][4846] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" host="172-234-200-33"
Jul 7 06:13:10.058672 containerd[1540]: 2025-07-07 06:13:09.974 [INFO][4846] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-200-33"
Jul 7 06:13:10.058672 containerd[1540]: 2025-07-07 06:13:09.988 [INFO][4846] ipam/ipam.go 511: Trying affinity for 192.168.82.64/26 host="172-234-200-33"
Jul 7 06:13:10.058672 containerd[1540]: 2025-07-07 06:13:09.993 [INFO][4846] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.64/26 host="172-234-200-33"
Jul 7 06:13:10.058672 containerd[1540]: 2025-07-07 06:13:10.002 [INFO][4846] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="172-234-200-33"
Jul 7 06:13:10.058672 containerd[1540]: 2025-07-07 06:13:10.003 [INFO][4846] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" host="172-234-200-33"
Jul 7 06:13:10.058672 containerd[1540]: 2025-07-07 06:13:10.006 [INFO][4846] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe
Jul 7 06:13:10.058672 containerd[1540]: 2025-07-07 06:13:10.013 [INFO][4846] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" host="172-234-200-33"
Jul 7 06:13:10.058672 containerd[1540]: 2025-07-07 06:13:10.018 [INFO][4846] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.72/26] block=192.168.82.64/26 handle="k8s-pod-network.945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" host="172-234-200-33"
Jul 7 06:13:10.058672 containerd[1540]: 2025-07-07 06:13:10.018 [INFO][4846] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.72/26] handle="k8s-pod-network.945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" host="172-234-200-33"
Jul 7 06:13:10.058672 containerd[1540]: 2025-07-07 06:13:10.018 [INFO][4846] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:13:10.058672 containerd[1540]: 2025-07-07 06:13:10.018 [INFO][4846] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.72/26] IPv6=[] ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" HandleID="k8s-pod-network.945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" Workload="172--234--200--33-k8s-calico--apiserver--f64c59f69--9hhmk-eth0"
Jul 7 06:13:10.059526 containerd[1540]: 2025-07-07 06:13:10.024 [INFO][4803] cni-plugin/k8s.go 418: Populated endpoint ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" Namespace="calico-apiserver" Pod="calico-apiserver-f64c59f69-9hhmk" WorkloadEndpoint="172--234--200--33-k8s-calico--apiserver--f64c59f69--9hhmk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--33-k8s-calico--apiserver--f64c59f69--9hhmk-eth0", GenerateName:"calico-apiserver-f64c59f69-", Namespace:"calico-apiserver", SelfLink:"", UID:"eea1d1cc-704d-4bb8-8684-4c28eeac74a0", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 12, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f64c59f69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-33", ContainerID:"", Pod:"calico-apiserver-f64c59f69-9hhmk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0e56d87daa0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:13:10.059526 containerd[1540]: 2025-07-07 06:13:10.024 [INFO][4803] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.72/32] ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" Namespace="calico-apiserver" Pod="calico-apiserver-f64c59f69-9hhmk" WorkloadEndpoint="172--234--200--33-k8s-calico--apiserver--f64c59f69--9hhmk-eth0"
Jul 7 06:13:10.059526 containerd[1540]: 2025-07-07 06:13:10.024 [INFO][4803] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e56d87daa0 ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" Namespace="calico-apiserver" Pod="calico-apiserver-f64c59f69-9hhmk" WorkloadEndpoint="172--234--200--33-k8s-calico--apiserver--f64c59f69--9hhmk-eth0"
Jul 7 06:13:10.059526 containerd[1540]: 2025-07-07 06:13:10.036 [INFO][4803] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" Namespace="calico-apiserver" Pod="calico-apiserver-f64c59f69-9hhmk" WorkloadEndpoint="172--234--200--33-k8s-calico--apiserver--f64c59f69--9hhmk-eth0"
Jul 7 06:13:10.059526 containerd[1540]: 2025-07-07 06:13:10.038 [INFO][4803] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" Namespace="calico-apiserver" Pod="calico-apiserver-f64c59f69-9hhmk" WorkloadEndpoint="172--234--200--33-k8s-calico--apiserver--f64c59f69--9hhmk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--33-k8s-calico--apiserver--f64c59f69--9hhmk-eth0", GenerateName:"calico-apiserver-f64c59f69-", Namespace:"calico-apiserver", SelfLink:"", UID:"eea1d1cc-704d-4bb8-8684-4c28eeac74a0", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 12, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f64c59f69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-33", ContainerID:"945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe", Pod:"calico-apiserver-f64c59f69-9hhmk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0e56d87daa0", MAC:"12:84:0c:d8:a9:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:13:10.059526 containerd[1540]: 2025-07-07 06:13:10.052 [INFO][4803] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" Namespace="calico-apiserver" Pod="calico-apiserver-f64c59f69-9hhmk" WorkloadEndpoint="172--234--200--33-k8s-calico--apiserver--f64c59f69--9hhmk-eth0"
Jul 7 06:13:10.077997 systemd-networkd[1466]: cali58f4694e563: Gained IPv6LL
Jul 7 06:13:10.132422 containerd[1540]: time="2025-07-07T06:13:10.132387552Z" level=info msg="connecting to shim 945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" address="unix:///run/containerd/s/74cb3cfa9defaf0953b03a39203f0a2e06ecc234440e4b8dfc92535cdbd6856f" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:13:10.159923 systemd-networkd[1466]: cali01e87d9af27: Link UP
Jul 7 06:13:10.161964 systemd-networkd[1466]: cali01e87d9af27: Gained carrier
Jul 7 06:13:10.205372 containerd[1540]: 2025-07-07 06:13:09.811 [INFO][4796] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jul 7 06:13:10.205372 containerd[1540]: 2025-07-07 06:13:09.841 [INFO][4796] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--200--33-k8s-coredns--7c65d6cfc9--fqrrz-eth0 coredns-7c65d6cfc9- kube-system 50095d4e-5ad9-4407-9d8b-ae5276b53aa1 851 0 2025-07-07 06:12:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-234-200-33 coredns-7c65d6cfc9-fqrrz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali01e87d9af27 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e07d0013a96f8884d58e6d0edf0b8f7925e48924f2ca297b2b54df923070997c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fqrrz" WorkloadEndpoint="172--234--200--33-k8s-coredns--7c65d6cfc9--fqrrz-"
Jul 7 06:13:10.205372 containerd[1540]: 2025-07-07 06:13:09.841 [INFO][4796] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e07d0013a96f8884d58e6d0edf0b8f7925e48924f2ca297b2b54df923070997c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fqrrz" WorkloadEndpoint="172--234--200--33-k8s-coredns--7c65d6cfc9--fqrrz-eth0"
Jul 7 06:13:10.205372 containerd[1540]: 2025-07-07 06:13:09.903 [INFO][4844] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e07d0013a96f8884d58e6d0edf0b8f7925e48924f2ca297b2b54df923070997c" HandleID="k8s-pod-network.e07d0013a96f8884d58e6d0edf0b8f7925e48924f2ca297b2b54df923070997c" Workload="172--234--200--33-k8s-coredns--7c65d6cfc9--fqrrz-eth0"
Jul 7 06:13:10.205372 containerd[1540]: 2025-07-07 06:13:09.904 [INFO][4844] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e07d0013a96f8884d58e6d0edf0b8f7925e48924f2ca297b2b54df923070997c" HandleID="k8s-pod-network.e07d0013a96f8884d58e6d0edf0b8f7925e48924f2ca297b2b54df923070997c" Workload="172--234--200--33-k8s-coredns--7c65d6cfc9--fqrrz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000102a50), Attrs:map[string]string{"namespace":"kube-system", "node":"172-234-200-33", "pod":"coredns-7c65d6cfc9-fqrrz", "timestamp":"2025-07-07 06:13:09.903785065 +0000 UTC"}, Hostname:"172-234-200-33", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 7 06:13:10.205372 containerd[1540]: 2025-07-07 06:13:09.904 [INFO][4844] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:13:10.205372 containerd[1540]: 2025-07-07 06:13:10.019 [INFO][4844] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:13:10.205372 containerd[1540]: 2025-07-07 06:13:10.019 [INFO][4844] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-200-33'
Jul 7 06:13:10.205372 containerd[1540]: 2025-07-07 06:13:10.063 [INFO][4844] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e07d0013a96f8884d58e6d0edf0b8f7925e48924f2ca297b2b54df923070997c" host="172-234-200-33"
Jul 7 06:13:10.205372 containerd[1540]: 2025-07-07 06:13:10.074 [INFO][4844] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-200-33"
Jul 7 06:13:10.205372 containerd[1540]: 2025-07-07 06:13:10.081 [INFO][4844] ipam/ipam.go 511: Trying affinity for 192.168.82.64/26 host="172-234-200-33"
Jul 7 06:13:10.205372 containerd[1540]: 2025-07-07 06:13:10.084 [INFO][4844] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.64/26 host="172-234-200-33"
Jul 7 06:13:10.205372 containerd[1540]: 2025-07-07 06:13:10.089 [INFO][4844] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="172-234-200-33"
Jul 7 06:13:10.205372 containerd[1540]: 2025-07-07 06:13:10.089 [INFO][4844] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.e07d0013a96f8884d58e6d0edf0b8f7925e48924f2ca297b2b54df923070997c" host="172-234-200-33"
Jul 7 06:13:10.205372 containerd[1540]: 2025-07-07 06:13:10.091 [INFO][4844] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e07d0013a96f8884d58e6d0edf0b8f7925e48924f2ca297b2b54df923070997c
Jul 7 06:13:10.205372 containerd[1540]: 2025-07-07 06:13:10.104 [INFO][4844] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.e07d0013a96f8884d58e6d0edf0b8f7925e48924f2ca297b2b54df923070997c" host="172-234-200-33"
Jul 7 06:13:10.205372 containerd[1540]: 2025-07-07 06:13:10.141 [INFO][4844] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.73/26] block=192.168.82.64/26 handle="k8s-pod-network.e07d0013a96f8884d58e6d0edf0b8f7925e48924f2ca297b2b54df923070997c" host="172-234-200-33"
Jul 7 06:13:10.205372 containerd[1540]: 2025-07-07 06:13:10.141 [INFO][4844] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.73/26] handle="k8s-pod-network.e07d0013a96f8884d58e6d0edf0b8f7925e48924f2ca297b2b54df923070997c" host="172-234-200-33"
Jul 7 06:13:10.205372 containerd[1540]: 2025-07-07 06:13:10.142 [INFO][4844] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:13:10.205372 containerd[1540]: 2025-07-07 06:13:10.142 [INFO][4844] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.73/26] IPv6=[] ContainerID="e07d0013a96f8884d58e6d0edf0b8f7925e48924f2ca297b2b54df923070997c" HandleID="k8s-pod-network.e07d0013a96f8884d58e6d0edf0b8f7925e48924f2ca297b2b54df923070997c" Workload="172--234--200--33-k8s-coredns--7c65d6cfc9--fqrrz-eth0"
Jul 7 06:13:10.205821 containerd[1540]: 2025-07-07 06:13:10.150 [INFO][4796] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e07d0013a96f8884d58e6d0edf0b8f7925e48924f2ca297b2b54df923070997c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fqrrz" WorkloadEndpoint="172--234--200--33-k8s-coredns--7c65d6cfc9--fqrrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--33-k8s-coredns--7c65d6cfc9--fqrrz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"50095d4e-5ad9-4407-9d8b-ae5276b53aa1", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 12, 36, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-33", ContainerID:"", Pod:"coredns-7c65d6cfc9-fqrrz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali01e87d9af27", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:13:10.205821 containerd[1540]: 2025-07-07 06:13:10.150 [INFO][4796] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.73/32] ContainerID="e07d0013a96f8884d58e6d0edf0b8f7925e48924f2ca297b2b54df923070997c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fqrrz" WorkloadEndpoint="172--234--200--33-k8s-coredns--7c65d6cfc9--fqrrz-eth0"
Jul 7 06:13:10.205821 containerd[1540]: 2025-07-07 06:13:10.150 [INFO][4796] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali01e87d9af27 ContainerID="e07d0013a96f8884d58e6d0edf0b8f7925e48924f2ca297b2b54df923070997c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fqrrz" WorkloadEndpoint="172--234--200--33-k8s-coredns--7c65d6cfc9--fqrrz-eth0"
Jul 7 06:13:10.205821 containerd[1540]: 2025-07-07 06:13:10.163 [INFO][4796] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e07d0013a96f8884d58e6d0edf0b8f7925e48924f2ca297b2b54df923070997c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fqrrz" WorkloadEndpoint="172--234--200--33-k8s-coredns--7c65d6cfc9--fqrrz-eth0"
Jul 7 06:13:10.205821 containerd[1540]: 2025-07-07 06:13:10.164 [INFO][4796] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e07d0013a96f8884d58e6d0edf0b8f7925e48924f2ca297b2b54df923070997c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fqrrz" WorkloadEndpoint="172--234--200--33-k8s-coredns--7c65d6cfc9--fqrrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--33-k8s-coredns--7c65d6cfc9--fqrrz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"50095d4e-5ad9-4407-9d8b-ae5276b53aa1", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 12, 36, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-33", ContainerID:"e07d0013a96f8884d58e6d0edf0b8f7925e48924f2ca297b2b54df923070997c", Pod:"coredns-7c65d6cfc9-fqrrz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali01e87d9af27", MAC:"06:57:b1:54:82:39", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""},
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:13:10.205821 containerd[1540]: 2025-07-07 06:13:10.189 [INFO][4796] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e07d0013a96f8884d58e6d0edf0b8f7925e48924f2ca297b2b54df923070997c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fqrrz" WorkloadEndpoint="172--234--200--33-k8s-coredns--7c65d6cfc9--fqrrz-eth0" Jul 7 06:13:10.210008 systemd[1]: Started cri-containerd-945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe.scope - libcontainer container 945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe. Jul 7 06:13:10.256305 containerd[1540]: time="2025-07-07T06:13:10.256260727Z" level=info msg="connecting to shim e07d0013a96f8884d58e6d0edf0b8f7925e48924f2ca297b2b54df923070997c" address="unix:///run/containerd/s/42941aaadc9824a263e9287dfce71d99b0ecdffc5a23ac6157355057bef534e5" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:13:10.326006 systemd[1]: Started cri-containerd-e07d0013a96f8884d58e6d0edf0b8f7925e48924f2ca297b2b54df923070997c.scope - libcontainer container e07d0013a96f8884d58e6d0edf0b8f7925e48924f2ca297b2b54df923070997c. 
Jul 7 06:13:10.378899 containerd[1540]: time="2025-07-07T06:13:10.378871438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-544ddc8dd6-m8cjm,Uid:43ff4723-2d17-47d6-a685-c6e35a5c21ea,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5f164e24ecec3fc7fe3ab891f1a226005feaef2c92cbdbb2d2cd7c461b52642b\"" Jul 7 06:13:10.442176 containerd[1540]: time="2025-07-07T06:13:10.442080449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f64c59f69-9hhmk,Uid:eea1d1cc-704d-4bb8-8684-4c28eeac74a0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe\"" Jul 7 06:13:10.465175 containerd[1540]: time="2025-07-07T06:13:10.465152264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fqrrz,Uid:50095d4e-5ad9-4407-9d8b-ae5276b53aa1,Namespace:kube-system,Attempt:0,} returns sandbox id \"e07d0013a96f8884d58e6d0edf0b8f7925e48924f2ca297b2b54df923070997c\"" Jul 7 06:13:10.466698 kubelet[2715]: E0707 06:13:10.466660 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:13:10.472028 containerd[1540]: time="2025-07-07T06:13:10.471972583Z" level=info msg="CreateContainer within sandbox \"e07d0013a96f8884d58e6d0edf0b8f7925e48924f2ca297b2b54df923070997c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:13:10.492403 containerd[1540]: time="2025-07-07T06:13:10.492316790Z" level=info msg="Container d2bf90c69572c78a6427e95e5d27fd8cb01abd678b0688d11cd088e37465d4bb: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:13:10.500162 containerd[1540]: time="2025-07-07T06:13:10.500130055Z" level=info msg="CreateContainer within sandbox \"e07d0013a96f8884d58e6d0edf0b8f7925e48924f2ca297b2b54df923070997c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"d2bf90c69572c78a6427e95e5d27fd8cb01abd678b0688d11cd088e37465d4bb\"" Jul 7 06:13:10.501066 containerd[1540]: time="2025-07-07T06:13:10.501032130Z" level=info msg="StartContainer for \"d2bf90c69572c78a6427e95e5d27fd8cb01abd678b0688d11cd088e37465d4bb\"" Jul 7 06:13:10.501795 containerd[1540]: time="2025-07-07T06:13:10.501719177Z" level=info msg="connecting to shim d2bf90c69572c78a6427e95e5d27fd8cb01abd678b0688d11cd088e37465d4bb" address="unix:///run/containerd/s/42941aaadc9824a263e9287dfce71d99b0ecdffc5a23ac6157355057bef534e5" protocol=ttrpc version=3 Jul 7 06:13:10.548204 systemd[1]: Started cri-containerd-d2bf90c69572c78a6427e95e5d27fd8cb01abd678b0688d11cd088e37465d4bb.scope - libcontainer container d2bf90c69572c78a6427e95e5d27fd8cb01abd678b0688d11cd088e37465d4bb. Jul 7 06:13:10.625558 containerd[1540]: time="2025-07-07T06:13:10.625518073Z" level=info msg="StartContainer for \"d2bf90c69572c78a6427e95e5d27fd8cb01abd678b0688d11cd088e37465d4bb\" returns successfully" Jul 7 06:13:10.803484 kubelet[2715]: I0707 06:13:10.802349 2715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:13:10.803484 kubelet[2715]: E0707 06:13:10.802588 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:13:10.847134 systemd-networkd[1466]: cali7767b4feada: Gained IPv6LL Jul 7 06:13:10.973360 kubelet[2715]: E0707 06:13:10.973332 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:13:10.973906 kubelet[2715]: E0707 06:13:10.973893 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:13:10.976009 kubelet[2715]: E0707 
06:13:10.975981 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:13:10.990929 kubelet[2715]: I0707 06:13:10.990740 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-fqrrz" podStartSLOduration=34.990728966 podStartE2EDuration="34.990728966s" podCreationTimestamp="2025-07-07 06:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:13:10.990175159 +0000 UTC m=+40.378058159" watchObservedRunningTime="2025-07-07 06:13:10.990728966 +0000 UTC m=+40.378611966" Jul 7 06:13:11.145995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3397186263.mount: Deactivated successfully. Jul 7 06:13:11.360032 systemd-networkd[1466]: cali0e56d87daa0: Gained IPv6LL Jul 7 06:13:11.485938 systemd-networkd[1466]: cali01e87d9af27: Gained IPv6LL Jul 7 06:13:11.695068 containerd[1540]: time="2025-07-07T06:13:11.694997021Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:11.699921 containerd[1540]: time="2025-07-07T06:13:11.699880591Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 7 06:13:11.700093 containerd[1540]: time="2025-07-07T06:13:11.700021000Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:11.711947 containerd[1540]: time="2025-07-07T06:13:11.711805790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jul 7 06:13:11.715216 containerd[1540]: time="2025-07-07T06:13:11.712322317Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 2.359568111s" Jul 7 06:13:11.715273 containerd[1540]: time="2025-07-07T06:13:11.715217355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 7 06:13:11.718557 containerd[1540]: time="2025-07-07T06:13:11.718528611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 06:13:11.721891 containerd[1540]: time="2025-07-07T06:13:11.721096170Z" level=info msg="CreateContainer within sandbox \"a076d0969a926765fa1918adf40e0247b9afd6db67e15bb6455b585ee2cd636d\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 7 06:13:11.731086 containerd[1540]: time="2025-07-07T06:13:11.731055107Z" level=info msg="Container edd672ee4b40f702d7c9f87d51e2e4dcdb65fbfabdaf6281b2a5a12ad129163c: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:13:11.740721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3917278321.mount: Deactivated successfully. 
Jul 7 06:13:11.747634 containerd[1540]: time="2025-07-07T06:13:11.747596316Z" level=info msg="CreateContainer within sandbox \"a076d0969a926765fa1918adf40e0247b9afd6db67e15bb6455b585ee2cd636d\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"edd672ee4b40f702d7c9f87d51e2e4dcdb65fbfabdaf6281b2a5a12ad129163c\"" Jul 7 06:13:11.748510 containerd[1540]: time="2025-07-07T06:13:11.748487743Z" level=info msg="StartContainer for \"edd672ee4b40f702d7c9f87d51e2e4dcdb65fbfabdaf6281b2a5a12ad129163c\"" Jul 7 06:13:11.750658 containerd[1540]: time="2025-07-07T06:13:11.750622543Z" level=info msg="connecting to shim edd672ee4b40f702d7c9f87d51e2e4dcdb65fbfabdaf6281b2a5a12ad129163c" address="unix:///run/containerd/s/4b948d1a57910f11e117c9423d7c9794346d801bda47f31e47e3ad8dd14b9673" protocol=ttrpc version=3 Jul 7 06:13:11.791705 systemd[1]: Started cri-containerd-edd672ee4b40f702d7c9f87d51e2e4dcdb65fbfabdaf6281b2a5a12ad129163c.scope - libcontainer container edd672ee4b40f702d7c9f87d51e2e4dcdb65fbfabdaf6281b2a5a12ad129163c. 
Jul 7 06:13:11.877044 containerd[1540]: time="2025-07-07T06:13:11.876999433Z" level=info msg="StartContainer for \"edd672ee4b40f702d7c9f87d51e2e4dcdb65fbfabdaf6281b2a5a12ad129163c\" returns successfully" Jul 7 06:13:11.934974 systemd-networkd[1466]: calicaf98e9acf6: Gained IPv6LL Jul 7 06:13:11.985243 kubelet[2715]: E0707 06:13:11.985160 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:13:11.986073 kubelet[2715]: E0707 06:13:11.986003 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:13:12.234639 systemd-networkd[1466]: vxlan.calico: Link UP Jul 7 06:13:12.234647 systemd-networkd[1466]: vxlan.calico: Gained carrier Jul 7 06:13:12.987767 kubelet[2715]: E0707 06:13:12.987720 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:13:12.990387 kubelet[2715]: I0707 06:13:12.989967 2715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:13:13.487669 containerd[1540]: time="2025-07-07T06:13:13.487623544Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:13.488953 containerd[1540]: time="2025-07-07T06:13:13.488913529Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 7 06:13:13.492696 containerd[1540]: time="2025-07-07T06:13:13.492329996Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 
06:13:13.494289 containerd[1540]: time="2025-07-07T06:13:13.494265659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:13.495093 containerd[1540]: time="2025-07-07T06:13:13.495059336Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 1.776466775s" Jul 7 06:13:13.495149 containerd[1540]: time="2025-07-07T06:13:13.495098755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 06:13:13.496852 containerd[1540]: time="2025-07-07T06:13:13.496792769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 7 06:13:13.498848 containerd[1540]: time="2025-07-07T06:13:13.498795842Z" level=info msg="CreateContainer within sandbox \"8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 06:13:13.507588 containerd[1540]: time="2025-07-07T06:13:13.507039811Z" level=info msg="Container fa2f2424a0c4462112abd2a5eb86f18eace52a4391f0e106a5f7923022298e03: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:13:13.525492 containerd[1540]: time="2025-07-07T06:13:13.525446671Z" level=info msg="CreateContainer within sandbox \"8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fa2f2424a0c4462112abd2a5eb86f18eace52a4391f0e106a5f7923022298e03\"" Jul 7 06:13:13.526051 
containerd[1540]: time="2025-07-07T06:13:13.525938510Z" level=info msg="StartContainer for \"fa2f2424a0c4462112abd2a5eb86f18eace52a4391f0e106a5f7923022298e03\"" Jul 7 06:13:13.527396 containerd[1540]: time="2025-07-07T06:13:13.527360554Z" level=info msg="connecting to shim fa2f2424a0c4462112abd2a5eb86f18eace52a4391f0e106a5f7923022298e03" address="unix:///run/containerd/s/0ab843e8e167d0256646d39dd0a8e78655f02dfad4c25fdd7ea10e1c1a04f58c" protocol=ttrpc version=3 Jul 7 06:13:13.555021 systemd[1]: Started cri-containerd-fa2f2424a0c4462112abd2a5eb86f18eace52a4391f0e106a5f7923022298e03.scope - libcontainer container fa2f2424a0c4462112abd2a5eb86f18eace52a4391f0e106a5f7923022298e03. Jul 7 06:13:13.624497 containerd[1540]: time="2025-07-07T06:13:13.624466839Z" level=info msg="StartContainer for \"fa2f2424a0c4462112abd2a5eb86f18eace52a4391f0e106a5f7923022298e03\" returns successfully" Jul 7 06:13:13.848798 kubelet[2715]: I0707 06:13:13.848544 2715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:13:13.902114 containerd[1540]: time="2025-07-07T06:13:13.902079856Z" level=info msg="TaskExit event in podsandbox handler container_id:\"48376a28ef59cb78e2c38bff6fed39c083881feb9c1a87f0e531728a0e01f710\" id:\"cd6366658fa5cd8a9620fd4a2dd4725346cb099e04c17977ddc8ea23575b73ae\" pid:5316 exited_at:{seconds:1751868793 nanos:899111637}" Jul 7 06:13:13.915850 kubelet[2715]: I0707 06:13:13.915351 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-glzkq" podStartSLOduration=23.330266728 podStartE2EDuration="26.915334816s" podCreationTimestamp="2025-07-07 06:12:47 +0000 UTC" firstStartedPulling="2025-07-07 06:13:08.131114413 +0000 UTC m=+37.518997413" lastFinishedPulling="2025-07-07 06:13:11.716182501 +0000 UTC m=+41.104065501" observedRunningTime="2025-07-07 06:13:11.99917171 +0000 UTC m=+41.387054710" watchObservedRunningTime="2025-07-07 06:13:13.915334816 +0000 UTC m=+43.303217816" Jul 7 
06:13:13.956550 containerd[1540]: time="2025-07-07T06:13:13.956508791Z" level=info msg="TaskExit event in podsandbox handler container_id:\"48376a28ef59cb78e2c38bff6fed39c083881feb9c1a87f0e531728a0e01f710\" id:\"dff951ad7734cfd7fc526b9e93b0656b326947b9c64b48275b52c8ccf67df00a\" pid:5338 exited_at:{seconds:1751868793 nanos:956126122}" Jul 7 06:13:14.004872 kubelet[2715]: I0707 06:13:14.004244 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f64c59f69-rg5ps" podStartSLOduration=25.560377449 podStartE2EDuration="30.004229492s" podCreationTimestamp="2025-07-07 06:12:44 +0000 UTC" firstStartedPulling="2025-07-07 06:13:09.052161819 +0000 UTC m=+38.440044819" lastFinishedPulling="2025-07-07 06:13:13.496013862 +0000 UTC m=+42.883896862" observedRunningTime="2025-07-07 06:13:14.003990683 +0000 UTC m=+43.391873683" watchObservedRunningTime="2025-07-07 06:13:14.004229492 +0000 UTC m=+43.392112502" Jul 7 06:13:14.048146 systemd-networkd[1466]: vxlan.calico: Gained IPv6LL Jul 7 06:13:14.498458 containerd[1540]: time="2025-07-07T06:13:14.498387321Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:14.499684 containerd[1540]: time="2025-07-07T06:13:14.499276188Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 7 06:13:14.500253 containerd[1540]: time="2025-07-07T06:13:14.500208074Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:14.501967 containerd[1540]: time="2025-07-07T06:13:14.501938068Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:14.502967 containerd[1540]: time="2025-07-07T06:13:14.502657416Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.005835027s" Jul 7 06:13:14.503145 containerd[1540]: time="2025-07-07T06:13:14.503056594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 7 06:13:14.504540 containerd[1540]: time="2025-07-07T06:13:14.504506939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 06:13:14.506989 containerd[1540]: time="2025-07-07T06:13:14.506187713Z" level=info msg="CreateContainer within sandbox \"d552c678c6d9576ab94ed4cd63418d8deb99527f7dcc81d1adb583e0bee7d9e6\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 7 06:13:14.516251 containerd[1540]: time="2025-07-07T06:13:14.516220328Z" level=info msg="Container c78176bb9add916b8ca52170caae8a728a0c63045536e1d50b1989f1e9836020: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:13:14.522560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount893994993.mount: Deactivated successfully. 
Jul 7 06:13:14.549849 containerd[1540]: time="2025-07-07T06:13:14.547519148Z" level=info msg="CreateContainer within sandbox \"d552c678c6d9576ab94ed4cd63418d8deb99527f7dcc81d1adb583e0bee7d9e6\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c78176bb9add916b8ca52170caae8a728a0c63045536e1d50b1989f1e9836020\"" Jul 7 06:13:14.550746 containerd[1540]: time="2025-07-07T06:13:14.550653557Z" level=info msg="StartContainer for \"c78176bb9add916b8ca52170caae8a728a0c63045536e1d50b1989f1e9836020\"" Jul 7 06:13:14.552864 containerd[1540]: time="2025-07-07T06:13:14.552775689Z" level=info msg="connecting to shim c78176bb9add916b8ca52170caae8a728a0c63045536e1d50b1989f1e9836020" address="unix:///run/containerd/s/7b62c5c49eff08136a30a4d9e18ea3eca11e7022de349f44a99f7c0132bdc484" protocol=ttrpc version=3 Jul 7 06:13:14.591081 systemd[1]: Started cri-containerd-c78176bb9add916b8ca52170caae8a728a0c63045536e1d50b1989f1e9836020.scope - libcontainer container c78176bb9add916b8ca52170caae8a728a0c63045536e1d50b1989f1e9836020. 
Jul 7 06:13:14.655207 containerd[1540]: time="2025-07-07T06:13:14.655140018Z" level=info msg="StartContainer for \"c78176bb9add916b8ca52170caae8a728a0c63045536e1d50b1989f1e9836020\" returns successfully" Jul 7 06:13:14.705806 containerd[1540]: time="2025-07-07T06:13:14.705760210Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:14.706564 containerd[1540]: time="2025-07-07T06:13:14.706533177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 7 06:13:14.708361 containerd[1540]: time="2025-07-07T06:13:14.708317061Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 203.778662ms" Jul 7 06:13:14.708403 containerd[1540]: time="2025-07-07T06:13:14.708344711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 06:13:14.710110 containerd[1540]: time="2025-07-07T06:13:14.710077065Z" level=info msg="CreateContainer within sandbox \"5f164e24ecec3fc7fe3ab891f1a226005feaef2c92cbdbb2d2cd7c461b52642b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 06:13:14.710460 containerd[1540]: time="2025-07-07T06:13:14.710438594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 06:13:14.719857 containerd[1540]: time="2025-07-07T06:13:14.715823685Z" level=info msg="Container 514c303f7e7646cc2313c1a7166d3c8e2d25692de7416367522fb7c5e704b887: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:13:14.727576 containerd[1540]: 
time="2025-07-07T06:13:14.727533263Z" level=info msg="CreateContainer within sandbox \"5f164e24ecec3fc7fe3ab891f1a226005feaef2c92cbdbb2d2cd7c461b52642b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"514c303f7e7646cc2313c1a7166d3c8e2d25692de7416367522fb7c5e704b887\"" Jul 7 06:13:14.728390 containerd[1540]: time="2025-07-07T06:13:14.728235081Z" level=info msg="StartContainer for \"514c303f7e7646cc2313c1a7166d3c8e2d25692de7416367522fb7c5e704b887\"" Jul 7 06:13:14.734937 containerd[1540]: time="2025-07-07T06:13:14.734886627Z" level=info msg="connecting to shim 514c303f7e7646cc2313c1a7166d3c8e2d25692de7416367522fb7c5e704b887" address="unix:///run/containerd/s/d99e261a9e32ce8b1df7b8365939f4dd2783d085e18a00fed6f4d9d7de2d9da0" protocol=ttrpc version=3 Jul 7 06:13:14.763495 systemd[1]: Started cri-containerd-514c303f7e7646cc2313c1a7166d3c8e2d25692de7416367522fb7c5e704b887.scope - libcontainer container 514c303f7e7646cc2313c1a7166d3c8e2d25692de7416367522fb7c5e704b887. 
Jul 7 06:13:14.833866 kubelet[2715]: I0707 06:13:14.833708 2715 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 7 06:13:14.834075 kubelet[2715]: I0707 06:13:14.833970 2715 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 7 06:13:14.838248 containerd[1540]: time="2025-07-07T06:13:14.838204663Z" level=info msg="StartContainer for \"514c303f7e7646cc2313c1a7166d3c8e2d25692de7416367522fb7c5e704b887\" returns successfully" Jul 7 06:13:14.881752 containerd[1540]: time="2025-07-07T06:13:14.881710610Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:14.882738 containerd[1540]: time="2025-07-07T06:13:14.882701587Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 7 06:13:14.883245 kubelet[2715]: I0707 06:13:14.883149 2715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:13:14.885724 containerd[1540]: time="2025-07-07T06:13:14.885657816Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 175.194773ms" Jul 7 06:13:14.885724 containerd[1540]: time="2025-07-07T06:13:14.885710266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 06:13:14.908272 containerd[1540]: time="2025-07-07T06:13:14.907230580Z" level=info 
msg="CreateContainer within sandbox \"945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 06:13:14.919522 containerd[1540]: time="2025-07-07T06:13:14.919469517Z" level=info msg="Container 28b6aa92d9a35513a2c88e997abf9011642a2327ccdfab07d62fba79ac217358: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:13:14.927855 containerd[1540]: time="2025-07-07T06:13:14.927363659Z" level=info msg="CreateContainer within sandbox \"945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"28b6aa92d9a35513a2c88e997abf9011642a2327ccdfab07d62fba79ac217358\"" Jul 7 06:13:14.929481 containerd[1540]: time="2025-07-07T06:13:14.929433872Z" level=info msg="StartContainer for \"28b6aa92d9a35513a2c88e997abf9011642a2327ccdfab07d62fba79ac217358\"" Jul 7 06:13:14.930945 containerd[1540]: time="2025-07-07T06:13:14.930907567Z" level=info msg="connecting to shim 28b6aa92d9a35513a2c88e997abf9011642a2327ccdfab07d62fba79ac217358" address="unix:///run/containerd/s/74cb3cfa9defaf0953b03a39203f0a2e06ecc234440e4b8dfc92535cdbd6856f" protocol=ttrpc version=3 Jul 7 06:13:14.963076 systemd[1]: Started cri-containerd-28b6aa92d9a35513a2c88e997abf9011642a2327ccdfab07d62fba79ac217358.scope - libcontainer container 28b6aa92d9a35513a2c88e997abf9011642a2327ccdfab07d62fba79ac217358. 
Jul 7 06:13:15.015582 kubelet[2715]: I0707 06:13:15.015103 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-544ddc8dd6-m8cjm" podStartSLOduration=25.686034437 podStartE2EDuration="30.015088543s" podCreationTimestamp="2025-07-07 06:12:45 +0000 UTC" firstStartedPulling="2025-07-07 06:13:10.379933723 +0000 UTC m=+39.767816723" lastFinishedPulling="2025-07-07 06:13:14.708987819 +0000 UTC m=+44.096870829" observedRunningTime="2025-07-07 06:13:15.014979443 +0000 UTC m=+44.402862443" watchObservedRunningTime="2025-07-07 06:13:15.015088543 +0000 UTC m=+44.402971543"
Jul 7 06:13:15.033952 kubelet[2715]: I0707 06:13:15.033625 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-v2bv6" podStartSLOduration=20.55162667 podStartE2EDuration="27.033608722s" podCreationTimestamp="2025-07-07 06:12:48 +0000 UTC" firstStartedPulling="2025-07-07 06:13:08.02178382 +0000 UTC m=+37.409666820" lastFinishedPulling="2025-07-07 06:13:14.503765872 +0000 UTC m=+43.891648872" observedRunningTime="2025-07-07 06:13:15.030735231 +0000 UTC m=+44.418618241" watchObservedRunningTime="2025-07-07 06:13:15.033608722 +0000 UTC m=+44.421491722"
Jul 7 06:13:15.095774 containerd[1540]: time="2025-07-07T06:13:15.095728097Z" level=info msg="TaskExit event in podsandbox handler container_id:\"edd672ee4b40f702d7c9f87d51e2e4dcdb65fbfabdaf6281b2a5a12ad129163c\" id:\"f4db9989c299c972e9093314887774bc9b64160e0dd8cd7212c2635ebd23246e\" pid:5427 exit_status:1 exited_at:{seconds:1751868795 nanos:92360338}"
Jul 7 06:13:15.134083 containerd[1540]: time="2025-07-07T06:13:15.134047240Z" level=info msg="StartContainer for \"28b6aa92d9a35513a2c88e997abf9011642a2327ccdfab07d62fba79ac217358\" returns successfully"
Jul 7 06:13:15.277445 containerd[1540]: time="2025-07-07T06:13:15.277018108Z" level=info msg="TaskExit event in podsandbox handler container_id:\"edd672ee4b40f702d7c9f87d51e2e4dcdb65fbfabdaf6281b2a5a12ad129163c\" id:\"167364e6a2e6144a319fb7ca54a9aea117c7ab9486ea5c5fe4be7b17fae5f07e\" pid:5493 exit_status:1 exited_at:{seconds:1751868795 nanos:276661859}"
Jul 7 06:13:16.014796 kubelet[2715]: I0707 06:13:16.014663 2715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 7 06:13:16.023562 kubelet[2715]: I0707 06:13:16.023524 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f64c59f69-9hhmk" podStartSLOduration=27.5817048 podStartE2EDuration="32.023510796s" podCreationTimestamp="2025-07-07 06:12:44 +0000 UTC" firstStartedPulling="2025-07-07 06:13:10.445014186 +0000 UTC m=+39.832897186" lastFinishedPulling="2025-07-07 06:13:14.886820182 +0000 UTC m=+44.274703182" observedRunningTime="2025-07-07 06:13:16.022792268 +0000 UTC m=+45.410675278" watchObservedRunningTime="2025-07-07 06:13:16.023510796 +0000 UTC m=+45.411393796"
Jul 7 06:13:17.017000 kubelet[2715]: I0707 06:13:17.016950 2715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 7 06:13:23.662719 kubelet[2715]: I0707 06:13:23.662465 2715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 7 06:13:27.555961 containerd[1540]: time="2025-07-07T06:13:27.555921747Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a517e62b763c3475832064ac7c7ae4b8fbcf7d330acf96db32514f6525828d44\" id:\"363f662722a3db3d7068e2efa98b0b667b34c3d3c415b17e85ef8ecd8a97c434\" pid:5547 exited_at:{seconds:1751868807 nanos:555628058}"
Jul 7 06:13:27.826078 kubelet[2715]: I0707 06:13:27.825947 2715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 7 06:13:27.860144 containerd[1540]: time="2025-07-07T06:13:27.860096064Z" level=info msg="StopContainer for \"28b6aa92d9a35513a2c88e997abf9011642a2327ccdfab07d62fba79ac217358\" with timeout 30 (s)"
Jul 7 06:13:27.861224 containerd[1540]: time="2025-07-07T06:13:27.860695003Z" level=info msg="Stop container \"28b6aa92d9a35513a2c88e997abf9011642a2327ccdfab07d62fba79ac217358\" with signal terminated"
Jul 7 06:13:27.907935 systemd[1]: Created slice kubepods-besteffort-pod0dc82a2a_5968_4087_acfc_56f211b9e2f8.slice - libcontainer container kubepods-besteffort-pod0dc82a2a_5968_4087_acfc_56f211b9e2f8.slice.
Jul 7 06:13:27.920652 systemd[1]: cri-containerd-28b6aa92d9a35513a2c88e997abf9011642a2327ccdfab07d62fba79ac217358.scope: Deactivated successfully.
Jul 7 06:13:27.921548 systemd[1]: cri-containerd-28b6aa92d9a35513a2c88e997abf9011642a2327ccdfab07d62fba79ac217358.scope: Consumed 1.237s CPU time, 48.3M memory peak.
Jul 7 06:13:27.924441 containerd[1540]: time="2025-07-07T06:13:27.924416386Z" level=info msg="received exit event container_id:\"28b6aa92d9a35513a2c88e997abf9011642a2327ccdfab07d62fba79ac217358\" id:\"28b6aa92d9a35513a2c88e997abf9011642a2327ccdfab07d62fba79ac217358\" pid:5450 exit_status:1 exited_at:{seconds:1751868807 nanos:924181556}"
Jul 7 06:13:27.924760 containerd[1540]: time="2025-07-07T06:13:27.924619225Z" level=info msg="TaskExit event in podsandbox handler container_id:\"28b6aa92d9a35513a2c88e997abf9011642a2327ccdfab07d62fba79ac217358\" id:\"28b6aa92d9a35513a2c88e997abf9011642a2327ccdfab07d62fba79ac217358\" pid:5450 exit_status:1 exited_at:{seconds:1751868807 nanos:924181556}"
Jul 7 06:13:27.952366 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28b6aa92d9a35513a2c88e997abf9011642a2327ccdfab07d62fba79ac217358-rootfs.mount: Deactivated successfully.
Jul 7 06:13:28.035069 containerd[1540]: time="2025-07-07T06:13:28.035037081Z" level=info msg="StopContainer for \"28b6aa92d9a35513a2c88e997abf9011642a2327ccdfab07d62fba79ac217358\" returns successfully"
Jul 7 06:13:28.035637 containerd[1540]: time="2025-07-07T06:13:28.035614470Z" level=info msg="StopPodSandbox for \"945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe\""
Jul 7 06:13:28.035709 containerd[1540]: time="2025-07-07T06:13:28.035670190Z" level=info msg="Container to stop \"28b6aa92d9a35513a2c88e997abf9011642a2327ccdfab07d62fba79ac217358\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 7 06:13:28.044430 systemd[1]: cri-containerd-945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe.scope: Deactivated successfully.
Jul 7 06:13:28.045124 containerd[1540]: time="2025-07-07T06:13:28.045099896Z" level=info msg="TaskExit event in podsandbox handler container_id:\"945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe\" id:\"945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe\" pid:4957 exit_status:137 exited_at:{seconds:1751868808 nanos:44902827}"
Jul 7 06:13:28.049593 kubelet[2715]: I0707 06:13:28.049568 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0dc82a2a-5968-4087-acfc-56f211b9e2f8-calico-apiserver-certs\") pod \"calico-apiserver-544ddc8dd6-gdlwn\" (UID: \"0dc82a2a-5968-4087-acfc-56f211b9e2f8\") " pod="calico-apiserver/calico-apiserver-544ddc8dd6-gdlwn"
Jul 7 06:13:28.049821 kubelet[2715]: I0707 06:13:28.049793 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq4vs\" (UniqueName: \"kubernetes.io/projected/0dc82a2a-5968-4087-acfc-56f211b9e2f8-kube-api-access-tq4vs\") pod \"calico-apiserver-544ddc8dd6-gdlwn\" (UID: \"0dc82a2a-5968-4087-acfc-56f211b9e2f8\") " pod="calico-apiserver/calico-apiserver-544ddc8dd6-gdlwn"
Jul 7 06:13:28.065320 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe-rootfs.mount: Deactivated successfully.
Jul 7 06:13:28.073989 containerd[1540]: time="2025-07-07T06:13:28.073946405Z" level=info msg="received exit event sandbox_id:\"945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe\" exit_status:137 exited_at:{seconds:1751868808 nanos:44902827}"
Jul 7 06:13:28.077127 containerd[1540]: time="2025-07-07T06:13:28.076934171Z" level=info msg="shim disconnected" id=945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe namespace=k8s.io
Jul 7 06:13:28.077127 containerd[1540]: time="2025-07-07T06:13:28.076956121Z" level=warning msg="cleaning up after shim disconnected" id=945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe namespace=k8s.io
Jul 7 06:13:28.077127 containerd[1540]: time="2025-07-07T06:13:28.076963621Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 06:13:28.078580 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe-shm.mount: Deactivated successfully.
Jul 7 06:13:28.129884 systemd-networkd[1466]: cali0e56d87daa0: Link DOWN
Jul 7 06:13:28.131497 systemd-networkd[1466]: cali0e56d87daa0: Lost carrier
Jul 7 06:13:28.215787 containerd[1540]: time="2025-07-07T06:13:28.215752693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-544ddc8dd6-gdlwn,Uid:0dc82a2a-5968-4087-acfc-56f211b9e2f8,Namespace:calico-apiserver,Attempt:0,}"
Jul 7 06:13:28.247631 containerd[1540]: 2025-07-07 06:13:28.128 [INFO][5625] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe"
Jul 7 06:13:28.247631 containerd[1540]: 2025-07-07 06:13:28.128 [INFO][5625] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" iface="eth0" netns="/var/run/netns/cni-39149124-7c28-cb28-ba41-fc4198e9c33b"
Jul 7 06:13:28.247631 containerd[1540]: 2025-07-07 06:13:28.129 [INFO][5625] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" iface="eth0" netns="/var/run/netns/cni-39149124-7c28-cb28-ba41-fc4198e9c33b"
Jul 7 06:13:28.247631 containerd[1540]: 2025-07-07 06:13:28.135 [INFO][5625] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" after=7.0227ms iface="eth0" netns="/var/run/netns/cni-39149124-7c28-cb28-ba41-fc4198e9c33b"
Jul 7 06:13:28.247631 containerd[1540]: 2025-07-07 06:13:28.136 [INFO][5625] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe"
Jul 7 06:13:28.247631 containerd[1540]: 2025-07-07 06:13:28.136 [INFO][5625] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe"
Jul 7 06:13:28.247631 containerd[1540]: 2025-07-07 06:13:28.187 [INFO][5632] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" HandleID="k8s-pod-network.945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" Workload="172--234--200--33-k8s-calico--apiserver--f64c59f69--9hhmk-eth0"
Jul 7 06:13:28.247631 containerd[1540]: 2025-07-07 06:13:28.187 [INFO][5632] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:13:28.247631 containerd[1540]: 2025-07-07 06:13:28.187 [INFO][5632] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:13:28.247631 containerd[1540]: 2025-07-07 06:13:28.241 [INFO][5632] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" HandleID="k8s-pod-network.945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" Workload="172--234--200--33-k8s-calico--apiserver--f64c59f69--9hhmk-eth0"
Jul 7 06:13:28.247631 containerd[1540]: 2025-07-07 06:13:28.241 [INFO][5632] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" HandleID="k8s-pod-network.945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" Workload="172--234--200--33-k8s-calico--apiserver--f64c59f69--9hhmk-eth0"
Jul 7 06:13:28.247631 containerd[1540]: 2025-07-07 06:13:28.243 [INFO][5632] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:13:28.247631 containerd[1540]: 2025-07-07 06:13:28.245 [INFO][5625] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe"
Jul 7 06:13:28.249426 containerd[1540]: time="2025-07-07T06:13:28.249043975Z" level=info msg="TearDown network for sandbox \"945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe\" successfully"
Jul 7 06:13:28.249603 containerd[1540]: time="2025-07-07T06:13:28.249498834Z" level=info msg="StopPodSandbox for \"945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe\" returns successfully"
Jul 7 06:13:28.368925 systemd-networkd[1466]: cali026e631e4b0: Link UP
Jul 7 06:13:28.370881 systemd-networkd[1466]: cali026e631e4b0: Gained carrier
Jul 7 06:13:28.384020 containerd[1540]: 2025-07-07 06:13:28.259 [INFO][5648] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--200--33-k8s-calico--apiserver--544ddc8dd6--gdlwn-eth0 calico-apiserver-544ddc8dd6- calico-apiserver 0dc82a2a-5968-4087-acfc-56f211b9e2f8 1172 0 2025-07-07 06:13:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:544ddc8dd6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-234-200-33 calico-apiserver-544ddc8dd6-gdlwn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali026e631e4b0 [] [] }} ContainerID="081ce4e4184a8e160ae6d3b63dbb965fd6d7c4bc1e56105004d4a14829fa85d2" Namespace="calico-apiserver" Pod="calico-apiserver-544ddc8dd6-gdlwn" WorkloadEndpoint="172--234--200--33-k8s-calico--apiserver--544ddc8dd6--gdlwn-"
Jul 7 06:13:28.384020 containerd[1540]: 2025-07-07 06:13:28.259 [INFO][5648] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="081ce4e4184a8e160ae6d3b63dbb965fd6d7c4bc1e56105004d4a14829fa85d2" Namespace="calico-apiserver" Pod="calico-apiserver-544ddc8dd6-gdlwn" WorkloadEndpoint="172--234--200--33-k8s-calico--apiserver--544ddc8dd6--gdlwn-eth0"
Jul 7 06:13:28.384020 containerd[1540]: 2025-07-07 06:13:28.303 [INFO][5659] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="081ce4e4184a8e160ae6d3b63dbb965fd6d7c4bc1e56105004d4a14829fa85d2" HandleID="k8s-pod-network.081ce4e4184a8e160ae6d3b63dbb965fd6d7c4bc1e56105004d4a14829fa85d2" Workload="172--234--200--33-k8s-calico--apiserver--544ddc8dd6--gdlwn-eth0"
Jul 7 06:13:28.384020 containerd[1540]: 2025-07-07 06:13:28.303 [INFO][5659] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="081ce4e4184a8e160ae6d3b63dbb965fd6d7c4bc1e56105004d4a14829fa85d2" HandleID="k8s-pod-network.081ce4e4184a8e160ae6d3b63dbb965fd6d7c4bc1e56105004d4a14829fa85d2" Workload="172--234--200--33-k8s-calico--apiserver--544ddc8dd6--gdlwn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000259270), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-234-200-33", "pod":"calico-apiserver-544ddc8dd6-gdlwn", "timestamp":"2025-07-07 06:13:28.302317809 +0000 UTC"}, Hostname:"172-234-200-33", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 7 06:13:28.384020 containerd[1540]: 2025-07-07 06:13:28.303 [INFO][5659] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:13:28.384020 containerd[1540]: 2025-07-07 06:13:28.303 [INFO][5659] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:13:28.384020 containerd[1540]: 2025-07-07 06:13:28.303 [INFO][5659] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-200-33'
Jul 7 06:13:28.384020 containerd[1540]: 2025-07-07 06:13:28.310 [INFO][5659] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.081ce4e4184a8e160ae6d3b63dbb965fd6d7c4bc1e56105004d4a14829fa85d2" host="172-234-200-33"
Jul 7 06:13:28.384020 containerd[1540]: 2025-07-07 06:13:28.342 [INFO][5659] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-200-33"
Jul 7 06:13:28.384020 containerd[1540]: 2025-07-07 06:13:28.346 [INFO][5659] ipam/ipam.go 511: Trying affinity for 192.168.82.64/26 host="172-234-200-33"
Jul 7 06:13:28.384020 containerd[1540]: 2025-07-07 06:13:28.348 [INFO][5659] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.64/26 host="172-234-200-33"
Jul 7 06:13:28.384020 containerd[1540]: 2025-07-07 06:13:28.351 [INFO][5659] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="172-234-200-33"
Jul 7 06:13:28.384020 containerd[1540]: 2025-07-07 06:13:28.351 [INFO][5659] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.081ce4e4184a8e160ae6d3b63dbb965fd6d7c4bc1e56105004d4a14829fa85d2" host="172-234-200-33"
Jul 7 06:13:28.384020 containerd[1540]: 2025-07-07 06:13:28.352 [INFO][5659] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.081ce4e4184a8e160ae6d3b63dbb965fd6d7c4bc1e56105004d4a14829fa85d2
Jul 7 06:13:28.384020 containerd[1540]: 2025-07-07 06:13:28.356 [INFO][5659] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.081ce4e4184a8e160ae6d3b63dbb965fd6d7c4bc1e56105004d4a14829fa85d2" host="172-234-200-33"
Jul 7 06:13:28.384020 containerd[1540]: 2025-07-07 06:13:28.361 [INFO][5659] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.74/26] block=192.168.82.64/26 handle="k8s-pod-network.081ce4e4184a8e160ae6d3b63dbb965fd6d7c4bc1e56105004d4a14829fa85d2" host="172-234-200-33"
Jul 7 06:13:28.384020 containerd[1540]: 2025-07-07 06:13:28.361 [INFO][5659] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.74/26] handle="k8s-pod-network.081ce4e4184a8e160ae6d3b63dbb965fd6d7c4bc1e56105004d4a14829fa85d2" host="172-234-200-33"
Jul 7 06:13:28.384020 containerd[1540]: 2025-07-07 06:13:28.361 [INFO][5659] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:13:28.384020 containerd[1540]: 2025-07-07 06:13:28.361 [INFO][5659] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.74/26] IPv6=[] ContainerID="081ce4e4184a8e160ae6d3b63dbb965fd6d7c4bc1e56105004d4a14829fa85d2" HandleID="k8s-pod-network.081ce4e4184a8e160ae6d3b63dbb965fd6d7c4bc1e56105004d4a14829fa85d2" Workload="172--234--200--33-k8s-calico--apiserver--544ddc8dd6--gdlwn-eth0"
Jul 7 06:13:28.384592 containerd[1540]: 2025-07-07 06:13:28.364 [INFO][5648] cni-plugin/k8s.go 418: Populated endpoint ContainerID="081ce4e4184a8e160ae6d3b63dbb965fd6d7c4bc1e56105004d4a14829fa85d2" Namespace="calico-apiserver" Pod="calico-apiserver-544ddc8dd6-gdlwn" WorkloadEndpoint="172--234--200--33-k8s-calico--apiserver--544ddc8dd6--gdlwn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--33-k8s-calico--apiserver--544ddc8dd6--gdlwn-eth0", GenerateName:"calico-apiserver-544ddc8dd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"0dc82a2a-5968-4087-acfc-56f211b9e2f8", ResourceVersion:"1172", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 13, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"544ddc8dd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-33", ContainerID:"", Pod:"calico-apiserver-544ddc8dd6-gdlwn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.74/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali026e631e4b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:13:28.384592 containerd[1540]: 2025-07-07 06:13:28.364 [INFO][5648] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.74/32] ContainerID="081ce4e4184a8e160ae6d3b63dbb965fd6d7c4bc1e56105004d4a14829fa85d2" Namespace="calico-apiserver" Pod="calico-apiserver-544ddc8dd6-gdlwn" WorkloadEndpoint="172--234--200--33-k8s-calico--apiserver--544ddc8dd6--gdlwn-eth0"
Jul 7 06:13:28.384592 containerd[1540]: 2025-07-07 06:13:28.364 [INFO][5648] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali026e631e4b0 ContainerID="081ce4e4184a8e160ae6d3b63dbb965fd6d7c4bc1e56105004d4a14829fa85d2" Namespace="calico-apiserver" Pod="calico-apiserver-544ddc8dd6-gdlwn" WorkloadEndpoint="172--234--200--33-k8s-calico--apiserver--544ddc8dd6--gdlwn-eth0"
Jul 7 06:13:28.384592 containerd[1540]: 2025-07-07 06:13:28.369 [INFO][5648] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="081ce4e4184a8e160ae6d3b63dbb965fd6d7c4bc1e56105004d4a14829fa85d2" Namespace="calico-apiserver" Pod="calico-apiserver-544ddc8dd6-gdlwn" WorkloadEndpoint="172--234--200--33-k8s-calico--apiserver--544ddc8dd6--gdlwn-eth0"
Jul 7 06:13:28.384592 containerd[1540]: 2025-07-07 06:13:28.369 [INFO][5648] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="081ce4e4184a8e160ae6d3b63dbb965fd6d7c4bc1e56105004d4a14829fa85d2" Namespace="calico-apiserver" Pod="calico-apiserver-544ddc8dd6-gdlwn" WorkloadEndpoint="172--234--200--33-k8s-calico--apiserver--544ddc8dd6--gdlwn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--33-k8s-calico--apiserver--544ddc8dd6--gdlwn-eth0", GenerateName:"calico-apiserver-544ddc8dd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"0dc82a2a-5968-4087-acfc-56f211b9e2f8", ResourceVersion:"1172", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 13, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"544ddc8dd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-33", ContainerID:"081ce4e4184a8e160ae6d3b63dbb965fd6d7c4bc1e56105004d4a14829fa85d2", Pod:"calico-apiserver-544ddc8dd6-gdlwn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.74/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali026e631e4b0", MAC:"22:2f:36:e3:1a:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:13:28.384592 containerd[1540]: 2025-07-07 06:13:28.380 [INFO][5648] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="081ce4e4184a8e160ae6d3b63dbb965fd6d7c4bc1e56105004d4a14829fa85d2" Namespace="calico-apiserver" Pod="calico-apiserver-544ddc8dd6-gdlwn" WorkloadEndpoint="172--234--200--33-k8s-calico--apiserver--544ddc8dd6--gdlwn-eth0"
Jul 7 06:13:28.415880 containerd[1540]: time="2025-07-07T06:13:28.415640757Z" level=info msg="connecting to shim 081ce4e4184a8e160ae6d3b63dbb965fd6d7c4bc1e56105004d4a14829fa85d2" address="unix:///run/containerd/s/c0a606f709edb5f27fd40570e35f7ac462b5c7e78cf0e2c88240477b80c786cb" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:13:28.454079 kubelet[2715]: I0707 06:13:28.454044 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm82j\" (UniqueName: \"kubernetes.io/projected/eea1d1cc-704d-4bb8-8684-4c28eeac74a0-kube-api-access-qm82j\") pod \"eea1d1cc-704d-4bb8-8684-4c28eeac74a0\" (UID: \"eea1d1cc-704d-4bb8-8684-4c28eeac74a0\") "
Jul 7 06:13:28.454721 kubelet[2715]: I0707 06:13:28.454590 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/eea1d1cc-704d-4bb8-8684-4c28eeac74a0-calico-apiserver-certs\") pod \"eea1d1cc-704d-4bb8-8684-4c28eeac74a0\" (UID: \"eea1d1cc-704d-4bb8-8684-4c28eeac74a0\") "
Jul 7 06:13:28.455984 systemd[1]: Started cri-containerd-081ce4e4184a8e160ae6d3b63dbb965fd6d7c4bc1e56105004d4a14829fa85d2.scope - libcontainer container 081ce4e4184a8e160ae6d3b63dbb965fd6d7c4bc1e56105004d4a14829fa85d2.
Jul 7 06:13:28.463129 kubelet[2715]: I0707 06:13:28.463065 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eea1d1cc-704d-4bb8-8684-4c28eeac74a0-kube-api-access-qm82j" (OuterVolumeSpecName: "kube-api-access-qm82j") pod "eea1d1cc-704d-4bb8-8684-4c28eeac74a0" (UID: "eea1d1cc-704d-4bb8-8684-4c28eeac74a0"). InnerVolumeSpecName "kube-api-access-qm82j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 7 06:13:28.466870 kubelet[2715]: I0707 06:13:28.466814 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eea1d1cc-704d-4bb8-8684-4c28eeac74a0-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "eea1d1cc-704d-4bb8-8684-4c28eeac74a0" (UID: "eea1d1cc-704d-4bb8-8684-4c28eeac74a0"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 7 06:13:28.556026 kubelet[2715]: I0707 06:13:28.555999 2715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qm82j\" (UniqueName: \"kubernetes.io/projected/eea1d1cc-704d-4bb8-8684-4c28eeac74a0-kube-api-access-qm82j\") on node \"172-234-200-33\" DevicePath \"\""
Jul 7 06:13:28.556026 kubelet[2715]: I0707 06:13:28.556024 2715 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/eea1d1cc-704d-4bb8-8684-4c28eeac74a0-calico-apiserver-certs\") on node \"172-234-200-33\" DevicePath \"\""
Jul 7 06:13:28.612362 containerd[1540]: time="2025-07-07T06:13:28.612303067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-544ddc8dd6-gdlwn,Uid:0dc82a2a-5968-4087-acfc-56f211b9e2f8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"081ce4e4184a8e160ae6d3b63dbb965fd6d7c4bc1e56105004d4a14829fa85d2\""
Jul 7 06:13:28.616366 containerd[1540]: time="2025-07-07T06:13:28.616318941Z" level=info msg="CreateContainer within sandbox \"081ce4e4184a8e160ae6d3b63dbb965fd6d7c4bc1e56105004d4a14829fa85d2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 7 06:13:28.624595 containerd[1540]: time="2025-07-07T06:13:28.624477539Z" level=info msg="Container 4c04e0525db4d9e94e7d48d82ae0379b2f81010d9758d8036afac12b2e44898b: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:13:28.631047 containerd[1540]: time="2025-07-07T06:13:28.631011070Z" level=info msg="CreateContainer within sandbox \"081ce4e4184a8e160ae6d3b63dbb965fd6d7c4bc1e56105004d4a14829fa85d2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4c04e0525db4d9e94e7d48d82ae0379b2f81010d9758d8036afac12b2e44898b\""
Jul 7 06:13:28.631821 containerd[1540]: time="2025-07-07T06:13:28.631638389Z" level=info msg="StartContainer for \"4c04e0525db4d9e94e7d48d82ae0379b2f81010d9758d8036afac12b2e44898b\""
Jul 7 06:13:28.632940 containerd[1540]: time="2025-07-07T06:13:28.632921227Z" level=info msg="connecting to shim 4c04e0525db4d9e94e7d48d82ae0379b2f81010d9758d8036afac12b2e44898b" address="unix:///run/containerd/s/c0a606f709edb5f27fd40570e35f7ac462b5c7e78cf0e2c88240477b80c786cb" protocol=ttrpc version=3
Jul 7 06:13:28.659025 systemd[1]: Started cri-containerd-4c04e0525db4d9e94e7d48d82ae0379b2f81010d9758d8036afac12b2e44898b.scope - libcontainer container 4c04e0525db4d9e94e7d48d82ae0379b2f81010d9758d8036afac12b2e44898b.
Jul 7 06:13:28.721204 containerd[1540]: time="2025-07-07T06:13:28.721143541Z" level=info msg="StartContainer for \"4c04e0525db4d9e94e7d48d82ae0379b2f81010d9758d8036afac12b2e44898b\" returns successfully"
Jul 7 06:13:28.752761 systemd[1]: Removed slice kubepods-besteffort-podeea1d1cc_704d_4bb8_8684_4c28eeac74a0.slice - libcontainer container kubepods-besteffort-podeea1d1cc_704d_4bb8_8684_4c28eeac74a0.slice.
Jul 7 06:13:28.753311 systemd[1]: kubepods-besteffort-podeea1d1cc_704d_4bb8_8684_4c28eeac74a0.slice: Consumed 1.268s CPU time, 48.6M memory peak.
Jul 7 06:13:28.956145 systemd[1]: run-netns-cni\x2d39149124\x2d7c28\x2dcb28\x2dba41\x2dfc4198e9c33b.mount: Deactivated successfully.
Jul 7 06:13:28.956256 systemd[1]: var-lib-kubelet-pods-eea1d1cc\x2d704d\x2d4bb8\x2d8684\x2d4c28eeac74a0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqm82j.mount: Deactivated successfully.
Jul 7 06:13:28.956329 systemd[1]: var-lib-kubelet-pods-eea1d1cc\x2d704d\x2d4bb8\x2d8684\x2d4c28eeac74a0-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
Jul 7 06:13:29.050560 kubelet[2715]: I0707 06:13:29.050035 2715 scope.go:117] "RemoveContainer" containerID="28b6aa92d9a35513a2c88e997abf9011642a2327ccdfab07d62fba79ac217358"
Jul 7 06:13:29.056029 containerd[1540]: time="2025-07-07T06:13:29.055988198Z" level=info msg="RemoveContainer for \"28b6aa92d9a35513a2c88e997abf9011642a2327ccdfab07d62fba79ac217358\""
Jul 7 06:13:29.078331 kubelet[2715]: I0707 06:13:29.078279 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-544ddc8dd6-gdlwn" podStartSLOduration=2.078263468 podStartE2EDuration="2.078263468s" podCreationTimestamp="2025-07-07 06:13:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:13:29.077573569 +0000 UTC m=+58.465456569" watchObservedRunningTime="2025-07-07 06:13:29.078263468 +0000 UTC m=+58.466146488"
Jul 7 06:13:29.187674 containerd[1540]: time="2025-07-07T06:13:29.187602052Z" level=info msg="RemoveContainer for \"28b6aa92d9a35513a2c88e997abf9011642a2327ccdfab07d62fba79ac217358\" returns successfully"
Jul 7 06:13:29.471407 systemd-networkd[1466]: cali026e631e4b0: Gained IPv6LL
Jul 7 06:13:30.054674 kubelet[2715]: I0707 06:13:30.054625 2715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 7 06:13:30.742444 containerd[1540]: time="2025-07-07T06:13:30.742084903Z" level=info msg="StopPodSandbox for \"945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe\""
Jul 7 06:13:30.756544 kubelet[2715]: I0707 06:13:30.756385 2715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eea1d1cc-704d-4bb8-8684-4c28eeac74a0" path="/var/lib/kubelet/pods/eea1d1cc-704d-4bb8-8684-4c28eeac74a0/volumes"
Jul 7 06:13:30.838567 containerd[1540]: 2025-07-07 06:13:30.784 [WARNING][5768] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" WorkloadEndpoint="172--234--200--33-k8s-calico--apiserver--f64c59f69--9hhmk-eth0"
Jul 7 06:13:30.838567 containerd[1540]: 2025-07-07 06:13:30.785 [INFO][5768] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe"
Jul 7 06:13:30.838567 containerd[1540]: 2025-07-07 06:13:30.785 [INFO][5768] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" iface="eth0" netns=""
Jul 7 06:13:30.838567 containerd[1540]: 2025-07-07 06:13:30.785 [INFO][5768] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe"
Jul 7 06:13:30.838567 containerd[1540]: 2025-07-07 06:13:30.785 [INFO][5768] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe"
Jul 7 06:13:30.838567 containerd[1540]: 2025-07-07 06:13:30.814 [INFO][5777] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" HandleID="k8s-pod-network.945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" Workload="172--234--200--33-k8s-calico--apiserver--f64c59f69--9hhmk-eth0"
Jul 7 06:13:30.838567 containerd[1540]: 2025-07-07 06:13:30.815 [INFO][5777] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:13:30.838567 containerd[1540]: 2025-07-07 06:13:30.816 [INFO][5777] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:13:30.838567 containerd[1540]: 2025-07-07 06:13:30.831 [WARNING][5777] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" HandleID="k8s-pod-network.945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" Workload="172--234--200--33-k8s-calico--apiserver--f64c59f69--9hhmk-eth0"
Jul 7 06:13:30.838567 containerd[1540]: 2025-07-07 06:13:30.831 [INFO][5777] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" HandleID="k8s-pod-network.945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" Workload="172--234--200--33-k8s-calico--apiserver--f64c59f69--9hhmk-eth0"
Jul 7 06:13:30.838567 containerd[1540]: 2025-07-07 06:13:30.834 [INFO][5777] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:13:30.838567 containerd[1540]: 2025-07-07 06:13:30.836 [INFO][5768] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe"
Jul 7 06:13:30.839207 containerd[1540]: time="2025-07-07T06:13:30.838915172Z" level=info msg="TearDown network for sandbox \"945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe\" successfully"
Jul 7 06:13:30.839207 containerd[1540]: time="2025-07-07T06:13:30.838934382Z" level=info msg="StopPodSandbox for \"945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe\" returns successfully"
Jul 7 06:13:30.839536 containerd[1540]: time="2025-07-07T06:13:30.839464241Z" level=info msg="RemovePodSandbox for \"945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe\""
Jul 7 06:13:30.839536 containerd[1540]: time="2025-07-07T06:13:30.839497451Z" level=info msg="Forcibly stopping sandbox \"945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe\""
Jul 7 06:13:30.901909 containerd[1540]: 2025-07-07 06:13:30.867 [WARNING][5792] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" WorkloadEndpoint="172--234--200--33-k8s-calico--apiserver--f64c59f69--9hhmk-eth0"
Jul 7 06:13:30.901909 containerd[1540]: 2025-07-07 06:13:30.867 [INFO][5792] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe"
Jul 7 06:13:30.901909 containerd[1540]: 2025-07-07 06:13:30.867 [INFO][5792] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" iface="eth0" netns=""
Jul 7 06:13:30.901909 containerd[1540]: 2025-07-07 06:13:30.867 [INFO][5792] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe"
Jul 7 06:13:30.901909 containerd[1540]: 2025-07-07 06:13:30.867 [INFO][5792] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe"
Jul 7 06:13:30.901909 containerd[1540]: 2025-07-07 06:13:30.891 [INFO][5799] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" HandleID="k8s-pod-network.945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" Workload="172--234--200--33-k8s-calico--apiserver--f64c59f69--9hhmk-eth0"
Jul 7 06:13:30.901909 containerd[1540]: 2025-07-07 06:13:30.891 [INFO][5799] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:13:30.901909 containerd[1540]: 2025-07-07 06:13:30.891 [INFO][5799] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:13:30.901909 containerd[1540]: 2025-07-07 06:13:30.896 [WARNING][5799] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" HandleID="k8s-pod-network.945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" Workload="172--234--200--33-k8s-calico--apiserver--f64c59f69--9hhmk-eth0" Jul 7 06:13:30.901909 containerd[1540]: 2025-07-07 06:13:30.896 [INFO][5799] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" HandleID="k8s-pod-network.945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" Workload="172--234--200--33-k8s-calico--apiserver--f64c59f69--9hhmk-eth0" Jul 7 06:13:30.901909 containerd[1540]: 2025-07-07 06:13:30.897 [INFO][5799] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:13:30.901909 containerd[1540]: 2025-07-07 06:13:30.899 [INFO][5792] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe" Jul 7 06:13:30.901909 containerd[1540]: time="2025-07-07T06:13:30.901520863Z" level=info msg="TearDown network for sandbox \"945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe\" successfully" Jul 7 06:13:30.903777 containerd[1540]: time="2025-07-07T06:13:30.903753290Z" level=info msg="Ensure that sandbox 945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe in task-service has been cleanup successfully" Jul 7 06:13:30.905950 containerd[1540]: time="2025-07-07T06:13:30.905918487Z" level=info msg="RemovePodSandbox \"945d61e811465c80291ecc5306a698cb2e21a06b4f42ba9ebe26d4d8d6077efe\" returns successfully" Jul 7 06:13:43.893290 containerd[1540]: time="2025-07-07T06:13:43.893257098Z" level=info msg="TaskExit event in podsandbox handler container_id:\"48376a28ef59cb78e2c38bff6fed39c083881feb9c1a87f0e531728a0e01f710\" id:\"883b9f624f7b0d60bbb42c9e7d662bbaf2324b5589419602c34d1daa880b7c3c\" pid:5835 exited_at:{seconds:1751868823 nanos:893010658}" Jul 7 06:13:44.746368 kubelet[2715]: E0707 
06:13:44.746327 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:13:44.983133 containerd[1540]: time="2025-07-07T06:13:44.982806030Z" level=info msg="TaskExit event in podsandbox handler container_id:\"edd672ee4b40f702d7c9f87d51e2e4dcdb65fbfabdaf6281b2a5a12ad129163c\" id:\"01854fc655c413a1f4d11c375f239d69ce09d63a26ca013cce92f036e452c89c\" pid:5856 exited_at:{seconds:1751868824 nanos:982197070}" Jul 7 06:13:57.590222 containerd[1540]: time="2025-07-07T06:13:57.590173399Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a517e62b763c3475832064ac7c7ae4b8fbcf7d330acf96db32514f6525828d44\" id:\"535a2e31df4fc25f76e17da53b2c71d4fe99fa893e847b62f38d371f4bf52b85\" pid:5889 exited_at:{seconds:1751868837 nanos:589020493}" Jul 7 06:13:57.746341 kubelet[2715]: E0707 06:13:57.746306 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:13:57.747276 kubelet[2715]: E0707 06:13:57.747246 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:14:03.747290 kubelet[2715]: E0707 06:14:03.746371 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:14:06.281811 containerd[1540]: time="2025-07-07T06:14:06.281771592Z" level=info msg="TaskExit event in podsandbox handler container_id:\"edd672ee4b40f702d7c9f87d51e2e4dcdb65fbfabdaf6281b2a5a12ad129163c\" id:\"11b8958f4aec38e454ae13dd1e1470f9505316713a1eb1df97df737b7d94baa8\" pid:5913 exited_at:{seconds:1751868846 nanos:280906647}" Jul 7 
06:14:06.840592 containerd[1540]: time="2025-07-07T06:14:06.840553736Z" level=info msg="TaskExit event in podsandbox handler container_id:\"48376a28ef59cb78e2c38bff6fed39c083881feb9c1a87f0e531728a0e01f710\" id:\"1a0e7f97ff18191d182c9f9f380b1fb312ed7ec9c3acb014f777761aaf4b8e04\" pid:5936 exited_at:{seconds:1751868846 nanos:840038407}" Jul 7 06:14:13.886461 containerd[1540]: time="2025-07-07T06:14:13.886420902Z" level=info msg="TaskExit event in podsandbox handler container_id:\"48376a28ef59cb78e2c38bff6fed39c083881feb9c1a87f0e531728a0e01f710\" id:\"32ac3aba07842bacea9f6a7efdfe4d4c63a28dd5ea91c6a2f7d9449a27d75480\" pid:5961 exited_at:{seconds:1751868853 nanos:886228400}" Jul 7 06:14:14.746898 kubelet[2715]: E0707 06:14:14.746304 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:14:14.959098 containerd[1540]: time="2025-07-07T06:14:14.959056239Z" level=info msg="TaskExit event in podsandbox handler container_id:\"edd672ee4b40f702d7c9f87d51e2e4dcdb65fbfabdaf6281b2a5a12ad129163c\" id:\"b707ebe167bc69f2ebb5af320d681df81cd5a1d9f59ee51694b006e1bddccfbc\" pid:5982 exited_at:{seconds:1751868854 nanos:958562382}" Jul 7 06:14:22.746419 kubelet[2715]: E0707 06:14:22.745652 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Jul 7 06:14:23.125387 kubelet[2715]: I0707 06:14:23.125182 2715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:14:23.169382 containerd[1540]: time="2025-07-07T06:14:23.169341706Z" level=info msg="StopContainer for \"fa2f2424a0c4462112abd2a5eb86f18eace52a4391f0e106a5f7923022298e03\" with timeout 30 (s)" Jul 7 06:14:23.171110 containerd[1540]: time="2025-07-07T06:14:23.169970674Z" level=info msg="Stop container 
\"fa2f2424a0c4462112abd2a5eb86f18eace52a4391f0e106a5f7923022298e03\" with signal terminated" Jul 7 06:14:23.207266 systemd[1]: cri-containerd-fa2f2424a0c4462112abd2a5eb86f18eace52a4391f0e106a5f7923022298e03.scope: Deactivated successfully. Jul 7 06:14:23.215917 containerd[1540]: time="2025-07-07T06:14:23.215765543Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa2f2424a0c4462112abd2a5eb86f18eace52a4391f0e106a5f7923022298e03\" id:\"fa2f2424a0c4462112abd2a5eb86f18eace52a4391f0e106a5f7923022298e03\" pid:5282 exit_status:1 exited_at:{seconds:1751868863 nanos:214767351}" Jul 7 06:14:23.215917 containerd[1540]: time="2025-07-07T06:14:23.215878784Z" level=info msg="received exit event container_id:\"fa2f2424a0c4462112abd2a5eb86f18eace52a4391f0e106a5f7923022298e03\" id:\"fa2f2424a0c4462112abd2a5eb86f18eace52a4391f0e106a5f7923022298e03\" pid:5282 exit_status:1 exited_at:{seconds:1751868863 nanos:214767351}" Jul 7 06:14:23.244258 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa2f2424a0c4462112abd2a5eb86f18eace52a4391f0e106a5f7923022298e03-rootfs.mount: Deactivated successfully. Jul 7 06:14:23.263145 containerd[1540]: time="2025-07-07T06:14:23.263108770Z" level=info msg="StopContainer for \"fa2f2424a0c4462112abd2a5eb86f18eace52a4391f0e106a5f7923022298e03\" returns successfully" Jul 7 06:14:23.263571 containerd[1540]: time="2025-07-07T06:14:23.263537805Z" level=info msg="StopPodSandbox for \"8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d\"" Jul 7 06:14:23.263615 containerd[1540]: time="2025-07-07T06:14:23.263587935Z" level=info msg="Container to stop \"fa2f2424a0c4462112abd2a5eb86f18eace52a4391f0e106a5f7923022298e03\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 06:14:23.269767 systemd[1]: cri-containerd-8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d.scope: Deactivated successfully. 
Jul 7 06:14:23.276472 containerd[1540]: time="2025-07-07T06:14:23.276370076Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d\" id:\"8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d\" pid:4649 exit_status:137 exited_at:{seconds:1751868863 nanos:276089602}" Jul 7 06:14:23.304319 containerd[1540]: time="2025-07-07T06:14:23.304275264Z" level=info msg="shim disconnected" id=8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d namespace=k8s.io Jul 7 06:14:23.304319 containerd[1540]: time="2025-07-07T06:14:23.304303424Z" level=warning msg="cleaning up after shim disconnected" id=8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d namespace=k8s.io Jul 7 06:14:23.304401 containerd[1540]: time="2025-07-07T06:14:23.304311404Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 06:14:23.305352 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d-rootfs.mount: Deactivated successfully. Jul 7 06:14:23.338676 containerd[1540]: time="2025-07-07T06:14:23.337333433Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d\" id:\"8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d\" pid:4649 exit_status:137 exited_at:{seconds:1751868863 nanos:330120678}" Jul 7 06:14:23.338676 containerd[1540]: time="2025-07-07T06:14:23.338045661Z" level=info msg="received exit event sandbox_id:\"8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d\" exit_status:137 exited_at:{seconds:1751868863 nanos:330120678}" Jul 7 06:14:23.343755 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d-shm.mount: Deactivated successfully. 
Jul 7 06:14:23.391969 systemd-networkd[1466]: cali58f4694e563: Link DOWN Jul 7 06:14:23.391977 systemd-networkd[1466]: cali58f4694e563: Lost carrier Jul 7 06:14:23.492025 containerd[1540]: 2025-07-07 06:14:23.389 [INFO][6065] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" Jul 7 06:14:23.492025 containerd[1540]: 2025-07-07 06:14:23.390 [INFO][6065] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" iface="eth0" netns="/var/run/netns/cni-b90c6d08-1c6a-f9b8-3f04-2d92880a0caa" Jul 7 06:14:23.492025 containerd[1540]: 2025-07-07 06:14:23.390 [INFO][6065] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" iface="eth0" netns="/var/run/netns/cni-b90c6d08-1c6a-f9b8-3f04-2d92880a0caa" Jul 7 06:14:23.492025 containerd[1540]: 2025-07-07 06:14:23.399 [INFO][6065] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" after=8.785573ms iface="eth0" netns="/var/run/netns/cni-b90c6d08-1c6a-f9b8-3f04-2d92880a0caa" Jul 7 06:14:23.492025 containerd[1540]: 2025-07-07 06:14:23.399 [INFO][6065] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" Jul 7 06:14:23.492025 containerd[1540]: 2025-07-07 06:14:23.399 [INFO][6065] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" Jul 7 06:14:23.492025 containerd[1540]: 2025-07-07 06:14:23.438 [INFO][6072] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" HandleID="k8s-pod-network.8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" Workload="172--234--200--33-k8s-calico--apiserver--f64c59f69--rg5ps-eth0" Jul 7 06:14:23.492025 containerd[1540]: 2025-07-07 06:14:23.439 [INFO][6072] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:14:23.492025 containerd[1540]: 2025-07-07 06:14:23.439 [INFO][6072] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:14:23.492025 containerd[1540]: 2025-07-07 06:14:23.484 [INFO][6072] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" HandleID="k8s-pod-network.8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" Workload="172--234--200--33-k8s-calico--apiserver--f64c59f69--rg5ps-eth0" Jul 7 06:14:23.492025 containerd[1540]: 2025-07-07 06:14:23.484 [INFO][6072] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" HandleID="k8s-pod-network.8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" Workload="172--234--200--33-k8s-calico--apiserver--f64c59f69--rg5ps-eth0" Jul 7 06:14:23.492025 containerd[1540]: 2025-07-07 06:14:23.486 [INFO][6072] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:14:23.492025 containerd[1540]: 2025-07-07 06:14:23.489 [INFO][6065] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" Jul 7 06:14:23.497491 containerd[1540]: time="2025-07-07T06:14:23.497464127Z" level=info msg="TearDown network for sandbox \"8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d\" successfully" Jul 7 06:14:23.497616 containerd[1540]: time="2025-07-07T06:14:23.497573809Z" level=info msg="StopPodSandbox for \"8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d\" returns successfully" Jul 7 06:14:23.498992 systemd[1]: run-netns-cni\x2db90c6d08\x2d1c6a\x2df9b8\x2d3f04\x2d2d92880a0caa.mount: Deactivated successfully. 
Jul 7 06:14:23.581809 kubelet[2715]: I0707 06:14:23.581524 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/94fe1b3f-505a-4c7f-bccc-eff5407fbbb4-calico-apiserver-certs\") pod \"94fe1b3f-505a-4c7f-bccc-eff5407fbbb4\" (UID: \"94fe1b3f-505a-4c7f-bccc-eff5407fbbb4\") " Jul 7 06:14:23.581809 kubelet[2715]: I0707 06:14:23.581588 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzcn2\" (UniqueName: \"kubernetes.io/projected/94fe1b3f-505a-4c7f-bccc-eff5407fbbb4-kube-api-access-lzcn2\") pod \"94fe1b3f-505a-4c7f-bccc-eff5407fbbb4\" (UID: \"94fe1b3f-505a-4c7f-bccc-eff5407fbbb4\") " Jul 7 06:14:23.587972 kubelet[2715]: I0707 06:14:23.587931 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94fe1b3f-505a-4c7f-bccc-eff5407fbbb4-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "94fe1b3f-505a-4c7f-bccc-eff5407fbbb4" (UID: "94fe1b3f-505a-4c7f-bccc-eff5407fbbb4"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 7 06:14:23.588865 kubelet[2715]: I0707 06:14:23.588047 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94fe1b3f-505a-4c7f-bccc-eff5407fbbb4-kube-api-access-lzcn2" (OuterVolumeSpecName: "kube-api-access-lzcn2") pod "94fe1b3f-505a-4c7f-bccc-eff5407fbbb4" (UID: "94fe1b3f-505a-4c7f-bccc-eff5407fbbb4"). InnerVolumeSpecName "kube-api-access-lzcn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 06:14:23.589565 systemd[1]: var-lib-kubelet-pods-94fe1b3f\x2d505a\x2d4c7f\x2dbccc\x2deff5407fbbb4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlzcn2.mount: Deactivated successfully. 
Jul 7 06:14:23.681970 kubelet[2715]: I0707 06:14:23.681939 2715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzcn2\" (UniqueName: \"kubernetes.io/projected/94fe1b3f-505a-4c7f-bccc-eff5407fbbb4-kube-api-access-lzcn2\") on node \"172-234-200-33\" DevicePath \"\"" Jul 7 06:14:23.681970 kubelet[2715]: I0707 06:14:23.681962 2715 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/94fe1b3f-505a-4c7f-bccc-eff5407fbbb4-calico-apiserver-certs\") on node \"172-234-200-33\" DevicePath \"\"" Jul 7 06:14:24.193742 kubelet[2715]: I0707 06:14:24.193711 2715 scope.go:117] "RemoveContainer" containerID="fa2f2424a0c4462112abd2a5eb86f18eace52a4391f0e106a5f7923022298e03" Jul 7 06:14:24.195913 containerd[1540]: time="2025-07-07T06:14:24.195799512Z" level=info msg="RemoveContainer for \"fa2f2424a0c4462112abd2a5eb86f18eace52a4391f0e106a5f7923022298e03\"" Jul 7 06:14:24.200854 systemd[1]: Removed slice kubepods-besteffort-pod94fe1b3f_505a_4c7f_bccc_eff5407fbbb4.slice - libcontainer container kubepods-besteffort-pod94fe1b3f_505a_4c7f_bccc_eff5407fbbb4.slice. 
Jul 7 06:14:24.202317 containerd[1540]: time="2025-07-07T06:14:24.202269007Z" level=info msg="RemoveContainer for \"fa2f2424a0c4462112abd2a5eb86f18eace52a4391f0e106a5f7923022298e03\" returns successfully" Jul 7 06:14:24.202531 kubelet[2715]: I0707 06:14:24.202511 2715 scope.go:117] "RemoveContainer" containerID="fa2f2424a0c4462112abd2a5eb86f18eace52a4391f0e106a5f7923022298e03" Jul 7 06:14:24.202714 containerd[1540]: time="2025-07-07T06:14:24.202690071Z" level=error msg="ContainerStatus for \"fa2f2424a0c4462112abd2a5eb86f18eace52a4391f0e106a5f7923022298e03\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fa2f2424a0c4462112abd2a5eb86f18eace52a4391f0e106a5f7923022298e03\": not found" Jul 7 06:14:24.203009 kubelet[2715]: E0707 06:14:24.202914 2715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fa2f2424a0c4462112abd2a5eb86f18eace52a4391f0e106a5f7923022298e03\": not found" containerID="fa2f2424a0c4462112abd2a5eb86f18eace52a4391f0e106a5f7923022298e03" Jul 7 06:14:24.203009 kubelet[2715]: I0707 06:14:24.202941 2715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fa2f2424a0c4462112abd2a5eb86f18eace52a4391f0e106a5f7923022298e03"} err="failed to get container status \"fa2f2424a0c4462112abd2a5eb86f18eace52a4391f0e106a5f7923022298e03\": rpc error: code = NotFound desc = an error occurred when try to find container \"fa2f2424a0c4462112abd2a5eb86f18eace52a4391f0e106a5f7923022298e03\": not found" Jul 7 06:14:24.244141 systemd[1]: var-lib-kubelet-pods-94fe1b3f\x2d505a\x2d4c7f\x2dbccc\x2deff5407fbbb4-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
Jul 7 06:14:24.475661 containerd[1540]: time="2025-07-07T06:14:24.475538239Z" level=info msg="TaskExit event in podsandbox handler exit_status:137 exited_at:{seconds:1751868863 nanos:276089602}" Jul 7 06:14:24.749335 kubelet[2715]: I0707 06:14:24.749214 2715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94fe1b3f-505a-4c7f-bccc-eff5407fbbb4" path="/var/lib/kubelet/pods/94fe1b3f-505a-4c7f-bccc-eff5407fbbb4/volumes" Jul 7 06:14:27.693711 containerd[1540]: time="2025-07-07T06:14:27.693660373Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a517e62b763c3475832064ac7c7ae4b8fbcf7d330acf96db32514f6525828d44\" id:\"2867be826e434e1b47164ead1931ac91f5744a0de4d964ffd87bcfcad7d0b723\" pid:6104 exited_at:{seconds:1751868867 nanos:693264349}" Jul 7 06:14:30.909013 containerd[1540]: time="2025-07-07T06:14:30.908971690Z" level=info msg="StopPodSandbox for \"8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d\"" Jul 7 06:14:30.973223 containerd[1540]: 2025-07-07 06:14:30.940 [WARNING][6127] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" WorkloadEndpoint="172--234--200--33-k8s-calico--apiserver--f64c59f69--rg5ps-eth0" Jul 7 06:14:30.973223 containerd[1540]: 2025-07-07 06:14:30.941 [INFO][6127] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" Jul 7 06:14:30.973223 containerd[1540]: 2025-07-07 06:14:30.941 [INFO][6127] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" iface="eth0" netns="" Jul 7 06:14:30.973223 containerd[1540]: 2025-07-07 06:14:30.941 [INFO][6127] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" Jul 7 06:14:30.973223 containerd[1540]: 2025-07-07 06:14:30.941 [INFO][6127] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" Jul 7 06:14:30.973223 containerd[1540]: 2025-07-07 06:14:30.961 [INFO][6134] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" HandleID="k8s-pod-network.8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" Workload="172--234--200--33-k8s-calico--apiserver--f64c59f69--rg5ps-eth0" Jul 7 06:14:30.973223 containerd[1540]: 2025-07-07 06:14:30.961 [INFO][6134] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:14:30.973223 containerd[1540]: 2025-07-07 06:14:30.961 [INFO][6134] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:14:30.973223 containerd[1540]: 2025-07-07 06:14:30.967 [WARNING][6134] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" HandleID="k8s-pod-network.8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" Workload="172--234--200--33-k8s-calico--apiserver--f64c59f69--rg5ps-eth0" Jul 7 06:14:30.973223 containerd[1540]: 2025-07-07 06:14:30.967 [INFO][6134] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" HandleID="k8s-pod-network.8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" Workload="172--234--200--33-k8s-calico--apiserver--f64c59f69--rg5ps-eth0" Jul 7 06:14:30.973223 containerd[1540]: 2025-07-07 06:14:30.968 [INFO][6134] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:14:30.973223 containerd[1540]: 2025-07-07 06:14:30.971 [INFO][6127] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" Jul 7 06:14:30.973727 containerd[1540]: time="2025-07-07T06:14:30.973297047Z" level=info msg="TearDown network for sandbox \"8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d\" successfully" Jul 7 06:14:30.973727 containerd[1540]: time="2025-07-07T06:14:30.973321257Z" level=info msg="StopPodSandbox for \"8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d\" returns successfully" Jul 7 06:14:30.974288 containerd[1540]: time="2025-07-07T06:14:30.973926873Z" level=info msg="RemovePodSandbox for \"8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d\"" Jul 7 06:14:30.974288 containerd[1540]: time="2025-07-07T06:14:30.973951203Z" level=info msg="Forcibly stopping sandbox \"8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d\"" Jul 7 06:14:31.030429 containerd[1540]: 2025-07-07 06:14:31.003 [WARNING][6148] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" WorkloadEndpoint="172--234--200--33-k8s-calico--apiserver--f64c59f69--rg5ps-eth0" Jul 7 06:14:31.030429 containerd[1540]: 2025-07-07 06:14:31.003 [INFO][6148] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" Jul 7 06:14:31.030429 containerd[1540]: 2025-07-07 06:14:31.003 [INFO][6148] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" iface="eth0" netns="" Jul 7 06:14:31.030429 containerd[1540]: 2025-07-07 06:14:31.003 [INFO][6148] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" Jul 7 06:14:31.030429 containerd[1540]: 2025-07-07 06:14:31.003 [INFO][6148] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" Jul 7 06:14:31.030429 containerd[1540]: 2025-07-07 06:14:31.020 [INFO][6156] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" HandleID="k8s-pod-network.8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" Workload="172--234--200--33-k8s-calico--apiserver--f64c59f69--rg5ps-eth0" Jul 7 06:14:31.030429 containerd[1540]: 2025-07-07 06:14:31.020 [INFO][6156] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:14:31.030429 containerd[1540]: 2025-07-07 06:14:31.020 [INFO][6156] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:14:31.030429 containerd[1540]: 2025-07-07 06:14:31.025 [WARNING][6156] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" HandleID="k8s-pod-network.8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" Workload="172--234--200--33-k8s-calico--apiserver--f64c59f69--rg5ps-eth0"
Jul 7 06:14:31.030429 containerd[1540]: 2025-07-07 06:14:31.025 [INFO][6156] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" HandleID="k8s-pod-network.8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d" Workload="172--234--200--33-k8s-calico--apiserver--f64c59f69--rg5ps-eth0"
Jul 7 06:14:31.030429 containerd[1540]: 2025-07-07 06:14:31.026 [INFO][6156] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:14:31.030429 containerd[1540]: 2025-07-07 06:14:31.028 [INFO][6148] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d"
Jul 7 06:14:31.030869 containerd[1540]: time="2025-07-07T06:14:31.030816229Z" level=info msg="TearDown network for sandbox \"8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d\" successfully"
Jul 7 06:14:31.033003 containerd[1540]: time="2025-07-07T06:14:31.032977220Z" level=info msg="Ensure that sandbox 8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d in task-service has been cleanup successfully"
Jul 7 06:14:31.035160 containerd[1540]: time="2025-07-07T06:14:31.035141911Z" level=info msg="RemovePodSandbox \"8a1abecb6f4597c41163000b1761c73990a7cb88d1e4991ae5945e301cee7f3d\" returns successfully"
Jul 7 06:14:40.746415 kubelet[2715]: E0707 06:14:40.746004 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Jul 7 06:14:43.887889 containerd[1540]: time="2025-07-07T06:14:43.887842167Z" level=info msg="TaskExit event in podsandbox handler container_id:\"48376a28ef59cb78e2c38bff6fed39c083881feb9c1a87f0e531728a0e01f710\" id:\"7b6f9c2e48395a025018a0e392b20619a34cffc4ea9ff2a3d0fbfb265fe40f7c\" pid:6185 exited_at:{seconds:1751868883 nanos:887571435}"
Jul 7 06:14:44.953622 containerd[1540]: time="2025-07-07T06:14:44.953539953Z" level=info msg="TaskExit event in podsandbox handler container_id:\"edd672ee4b40f702d7c9f87d51e2e4dcdb65fbfabdaf6281b2a5a12ad129163c\" id:\"a495f8628210f19a51fa3875481ca6e11a6260acce0c2cf23b8f8ee809660b53\" pid:6206 exited_at:{seconds:1751868884 nanos:953147780}"
Jul 7 06:14:49.207781 systemd[1]: Started sshd@8-172.234.200.33:22-147.75.109.163:52048.service - OpenSSH per-connection server daemon (147.75.109.163:52048).
Jul 7 06:14:49.593993 sshd[6240]: Accepted publickey for core from 147.75.109.163 port 52048 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4
Jul 7 06:14:49.596968 sshd-session[6240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:14:49.602593 systemd-logind[1514]: New session 8 of user core.
Jul 7 06:14:49.611012 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 7 06:14:49.918681 sshd[6242]: Connection closed by 147.75.109.163 port 52048
Jul 7 06:14:49.920083 sshd-session[6240]: pam_unix(sshd:session): session closed for user core
Jul 7 06:14:49.924882 systemd[1]: sshd@8-172.234.200.33:22-147.75.109.163:52048.service: Deactivated successfully.
Jul 7 06:14:49.928345 systemd[1]: session-8.scope: Deactivated successfully.
Jul 7 06:14:49.930296 systemd-logind[1514]: Session 8 logged out. Waiting for processes to exit.
Jul 7 06:14:49.932593 systemd-logind[1514]: Removed session 8.
Jul 7 06:14:54.976962 systemd[1]: Started sshd@9-172.234.200.33:22-147.75.109.163:52060.service - OpenSSH per-connection server daemon (147.75.109.163:52060).
Jul 7 06:14:55.311155 sshd[6256]: Accepted publickey for core from 147.75.109.163 port 52060 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4
Jul 7 06:14:55.313251 sshd-session[6256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:14:55.318403 systemd-logind[1514]: New session 9 of user core.
Jul 7 06:14:55.322966 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 7 06:14:55.617094 sshd[6258]: Connection closed by 147.75.109.163 port 52060
Jul 7 06:14:55.617639 sshd-session[6256]: pam_unix(sshd:session): session closed for user core
Jul 7 06:14:55.626620 systemd[1]: sshd@9-172.234.200.33:22-147.75.109.163:52060.service: Deactivated successfully.
Jul 7 06:14:55.630019 systemd[1]: session-9.scope: Deactivated successfully.
Jul 7 06:14:55.632014 systemd-logind[1514]: Session 9 logged out. Waiting for processes to exit.
Jul 7 06:14:55.635020 systemd-logind[1514]: Removed session 9.
Jul 7 06:14:57.551646 containerd[1540]: time="2025-07-07T06:14:57.551597775Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a517e62b763c3475832064ac7c7ae4b8fbcf7d330acf96db32514f6525828d44\" id:\"33e05a1be889516063308f4759b94b18c69ca3ef8aa5cb2fc24a3c659f4498e9\" pid:6282 exited_at:{seconds:1751868897 nanos:551284373}"
Jul 7 06:15:00.678647 systemd[1]: Started sshd@10-172.234.200.33:22-147.75.109.163:56256.service - OpenSSH per-connection server daemon (147.75.109.163:56256).
Jul 7 06:15:01.018084 sshd[6300]: Accepted publickey for core from 147.75.109.163 port 56256 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4
Jul 7 06:15:01.019554 sshd-session[6300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:01.024468 systemd-logind[1514]: New session 10 of user core.
Jul 7 06:15:01.029955 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 7 06:15:01.319017 sshd[6302]: Connection closed by 147.75.109.163 port 56256
Jul 7 06:15:01.320066 sshd-session[6300]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:01.324558 systemd[1]: sshd@10-172.234.200.33:22-147.75.109.163:56256.service: Deactivated successfully.
Jul 7 06:15:01.326792 systemd[1]: session-10.scope: Deactivated successfully.
Jul 7 06:15:01.328265 systemd-logind[1514]: Session 10 logged out. Waiting for processes to exit.
Jul 7 06:15:01.330425 systemd-logind[1514]: Removed session 10.
Jul 7 06:15:01.378016 systemd[1]: Started sshd@11-172.234.200.33:22-147.75.109.163:56258.service - OpenSSH per-connection server daemon (147.75.109.163:56258).
Jul 7 06:15:01.716864 sshd[6315]: Accepted publickey for core from 147.75.109.163 port 56258 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4
Jul 7 06:15:01.717881 sshd-session[6315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:01.722899 systemd-logind[1514]: New session 11 of user core.
Jul 7 06:15:01.726954 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 7 06:15:02.043477 sshd[6317]: Connection closed by 147.75.109.163 port 56258
Jul 7 06:15:02.044724 sshd-session[6315]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:02.048755 systemd[1]: sshd@11-172.234.200.33:22-147.75.109.163:56258.service: Deactivated successfully.
Jul 7 06:15:02.051203 systemd[1]: session-11.scope: Deactivated successfully.
Jul 7 06:15:02.052396 systemd-logind[1514]: Session 11 logged out. Waiting for processes to exit.
Jul 7 06:15:02.053808 systemd-logind[1514]: Removed session 11.
Jul 7 06:15:02.105056 systemd[1]: Started sshd@12-172.234.200.33:22-147.75.109.163:56270.service - OpenSSH per-connection server daemon (147.75.109.163:56270).
Jul 7 06:15:02.453160 sshd[6326]: Accepted publickey for core from 147.75.109.163 port 56270 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4
Jul 7 06:15:02.455129 sshd-session[6326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:02.462364 systemd-logind[1514]: New session 12 of user core.
Jul 7 06:15:02.472947 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 7 06:15:02.753053 sshd[6328]: Connection closed by 147.75.109.163 port 56270
Jul 7 06:15:02.754149 sshd-session[6326]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:02.758748 systemd[1]: sshd@12-172.234.200.33:22-147.75.109.163:56270.service: Deactivated successfully.
Jul 7 06:15:02.760938 systemd[1]: session-12.scope: Deactivated successfully.
Jul 7 06:15:02.762424 systemd-logind[1514]: Session 12 logged out. Waiting for processes to exit.
Jul 7 06:15:02.764023 systemd-logind[1514]: Removed session 12.
Jul 7 06:15:03.746286 kubelet[2715]: E0707 06:15:03.746245 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Jul 7 06:15:06.266437 containerd[1540]: time="2025-07-07T06:15:06.266398776Z" level=info msg="TaskExit event in podsandbox handler container_id:\"edd672ee4b40f702d7c9f87d51e2e4dcdb65fbfabdaf6281b2a5a12ad129163c\" id:\"6513888383221e04e7d7dd2487c0c3f7960d13c74bbd75f6a90f5c509dd1f81d\" pid:6354 exited_at:{seconds:1751868906 nanos:266070964}"
Jul 7 06:15:06.729371 containerd[1540]: time="2025-07-07T06:15:06.729328378Z" level=info msg="TaskExit event in podsandbox handler container_id:\"48376a28ef59cb78e2c38bff6fed39c083881feb9c1a87f0e531728a0e01f710\" id:\"5777a8a8d7a1976d3042ab5ab24476dfe052cf2e52c00f204e3b6989430ef90d\" pid:6377 exited_at:{seconds:1751868906 nanos:728869925}"
Jul 7 06:15:07.819407 systemd[1]: Started sshd@13-172.234.200.33:22-147.75.109.163:59304.service - OpenSSH per-connection server daemon (147.75.109.163:59304).
Jul 7 06:15:08.164709 sshd[6389]: Accepted publickey for core from 147.75.109.163 port 59304 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4
Jul 7 06:15:08.166541 sshd-session[6389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:08.172336 systemd-logind[1514]: New session 13 of user core.
Jul 7 06:15:08.178987 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 7 06:15:08.482269 sshd[6391]: Connection closed by 147.75.109.163 port 59304
Jul 7 06:15:08.482418 sshd-session[6389]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:08.487637 systemd-logind[1514]: Session 13 logged out. Waiting for processes to exit.
Jul 7 06:15:08.488354 systemd[1]: sshd@13-172.234.200.33:22-147.75.109.163:59304.service: Deactivated successfully.
Jul 7 06:15:08.491425 systemd[1]: session-13.scope: Deactivated successfully.
Jul 7 06:15:08.492961 systemd-logind[1514]: Removed session 13.
Jul 7 06:15:08.544678 systemd[1]: Started sshd@14-172.234.200.33:22-147.75.109.163:59306.service - OpenSSH per-connection server daemon (147.75.109.163:59306).
Jul 7 06:15:08.894291 sshd[6403]: Accepted publickey for core from 147.75.109.163 port 59306 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4
Jul 7 06:15:08.895715 sshd-session[6403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:08.900760 systemd-logind[1514]: New session 14 of user core.
Jul 7 06:15:08.904010 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 7 06:15:09.350569 sshd[6405]: Connection closed by 147.75.109.163 port 59306
Jul 7 06:15:09.352138 sshd-session[6403]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:09.356532 systemd-logind[1514]: Session 14 logged out. Waiting for processes to exit.
Jul 7 06:15:09.357203 systemd[1]: sshd@14-172.234.200.33:22-147.75.109.163:59306.service: Deactivated successfully.
Jul 7 06:15:09.359569 systemd[1]: session-14.scope: Deactivated successfully.
Jul 7 06:15:09.361981 systemd-logind[1514]: Removed session 14.
Jul 7 06:15:09.409905 systemd[1]: Started sshd@15-172.234.200.33:22-147.75.109.163:59316.service - OpenSSH per-connection server daemon (147.75.109.163:59316).
Jul 7 06:15:09.749867 sshd[6414]: Accepted publickey for core from 147.75.109.163 port 59316 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4
Jul 7 06:15:09.751155 sshd-session[6414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:09.756431 systemd-logind[1514]: New session 15 of user core.
Jul 7 06:15:09.760955 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 7 06:15:11.593744 sshd[6416]: Connection closed by 147.75.109.163 port 59316
Jul 7 06:15:11.595856 sshd-session[6414]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:11.600874 systemd-logind[1514]: Session 15 logged out. Waiting for processes to exit.
Jul 7 06:15:11.601496 systemd[1]: sshd@15-172.234.200.33:22-147.75.109.163:59316.service: Deactivated successfully.
Jul 7 06:15:11.604559 systemd[1]: session-15.scope: Deactivated successfully.
Jul 7 06:15:11.605290 systemd[1]: session-15.scope: Consumed 537ms CPU time, 72.5M memory peak.
Jul 7 06:15:11.607361 systemd-logind[1514]: Removed session 15.
Jul 7 06:15:11.661003 systemd[1]: Started sshd@16-172.234.200.33:22-147.75.109.163:59318.service - OpenSSH per-connection server daemon (147.75.109.163:59318).
Jul 7 06:15:12.009497 sshd[6434]: Accepted publickey for core from 147.75.109.163 port 59318 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4
Jul 7 06:15:12.011050 sshd-session[6434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:12.016795 systemd-logind[1514]: New session 16 of user core.
Jul 7 06:15:12.021943 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 7 06:15:12.413950 sshd[6436]: Connection closed by 147.75.109.163 port 59318
Jul 7 06:15:12.414485 sshd-session[6434]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:12.418324 systemd-logind[1514]: Session 16 logged out. Waiting for processes to exit.
Jul 7 06:15:12.419027 systemd[1]: sshd@16-172.234.200.33:22-147.75.109.163:59318.service: Deactivated successfully.
Jul 7 06:15:12.421130 systemd[1]: session-16.scope: Deactivated successfully.
Jul 7 06:15:12.423238 systemd-logind[1514]: Removed session 16.
Jul 7 06:15:12.476012 systemd[1]: Started sshd@17-172.234.200.33:22-147.75.109.163:59328.service - OpenSSH per-connection server daemon (147.75.109.163:59328).
Jul 7 06:15:12.746617 kubelet[2715]: E0707 06:15:12.746085 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Jul 7 06:15:12.805665 sshd[6446]: Accepted publickey for core from 147.75.109.163 port 59328 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4
Jul 7 06:15:12.806250 sshd-session[6446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:12.815238 systemd-logind[1514]: New session 17 of user core.
Jul 7 06:15:12.821953 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 7 06:15:13.169378 sshd[6448]: Connection closed by 147.75.109.163 port 59328
Jul 7 06:15:13.170105 sshd-session[6446]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:13.174355 systemd[1]: sshd@17-172.234.200.33:22-147.75.109.163:59328.service: Deactivated successfully.
Jul 7 06:15:13.177487 systemd[1]: session-17.scope: Deactivated successfully.
Jul 7 06:15:13.178432 systemd-logind[1514]: Session 17 logged out. Waiting for processes to exit.
Jul 7 06:15:13.180560 systemd-logind[1514]: Removed session 17.
Jul 7 06:15:13.897280 containerd[1540]: time="2025-07-07T06:15:13.897232965Z" level=info msg="TaskExit event in podsandbox handler container_id:\"48376a28ef59cb78e2c38bff6fed39c083881feb9c1a87f0e531728a0e01f710\" id:\"71c720ac6602e102a17da79158ff8a8944d9430e2ba99c26192aaf453c14e7a3\" pid:6472 exited_at:{seconds:1751868913 nanos:897045344}"
Jul 7 06:15:14.422755 update_engine[1517]: I20250707 06:15:14.422501 1517 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jul 7 06:15:14.422755 update_engine[1517]: I20250707 06:15:14.422736 1517 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jul 7 06:15:14.423209 update_engine[1517]: I20250707 06:15:14.422982 1517 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jul 7 06:15:14.424691 update_engine[1517]: I20250707 06:15:14.424638 1517 omaha_request_params.cc:62] Current group set to alpha
Jul 7 06:15:14.424883 update_engine[1517]: I20250707 06:15:14.424778 1517 update_attempter.cc:499] Already updated boot flags. Skipping.
Jul 7 06:15:14.424883 update_engine[1517]: I20250707 06:15:14.424799 1517 update_attempter.cc:643] Scheduling an action processor start.
Jul 7 06:15:14.424883 update_engine[1517]: I20250707 06:15:14.424822 1517 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 7 06:15:14.425438 update_engine[1517]: I20250707 06:15:14.425083 1517 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jul 7 06:15:14.425438 update_engine[1517]: I20250707 06:15:14.425163 1517 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jul 7 06:15:14.425438 update_engine[1517]: I20250707 06:15:14.425173 1517 omaha_request_action.cc:272] Request:
Jul 7 06:15:14.425438 update_engine[1517]:
Jul 7 06:15:14.425438 update_engine[1517]:
Jul 7 06:15:14.425438 update_engine[1517]:
Jul 7 06:15:14.425438 update_engine[1517]:
Jul 7 06:15:14.425438 update_engine[1517]:
Jul 7 06:15:14.425438 update_engine[1517]:
Jul 7 06:15:14.425438 update_engine[1517]:
Jul 7 06:15:14.425438 update_engine[1517]:
Jul 7 06:15:14.425438 update_engine[1517]: I20250707 06:15:14.425181 1517 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 7 06:15:14.429268 locksmithd[1555]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jul 7 06:15:14.431374 update_engine[1517]: I20250707 06:15:14.431339 1517 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 7 06:15:14.431969 update_engine[1517]: I20250707 06:15:14.431931 1517 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 7 06:15:14.484775 update_engine[1517]: E20250707 06:15:14.484714 1517 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 7 06:15:14.484898 update_engine[1517]: I20250707 06:15:14.484817 1517 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jul 7 06:15:14.963598 containerd[1540]: time="2025-07-07T06:15:14.963546411Z" level=info msg="TaskExit event in podsandbox handler container_id:\"edd672ee4b40f702d7c9f87d51e2e4dcdb65fbfabdaf6281b2a5a12ad129163c\" id:\"caa1642c8bf175064e0e4d44efc7feea83ba9b5e941acc30f6edff1fb0d4282e\" pid:6494 exited_at:{seconds:1751868914 nanos:963330930}"
Jul 7 06:15:16.746854 kubelet[2715]: E0707 06:15:16.746191 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Jul 7 06:15:16.747288 kubelet[2715]: E0707 06:15:16.746912 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Jul 7 06:15:18.233909 systemd[1]: Started sshd@18-172.234.200.33:22-147.75.109.163:43660.service - OpenSSH per-connection server daemon (147.75.109.163:43660).
Jul 7 06:15:18.568374 sshd[6508]: Accepted publickey for core from 147.75.109.163 port 43660 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4
Jul 7 06:15:18.570133 sshd-session[6508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:18.576738 systemd-logind[1514]: New session 18 of user core.
Jul 7 06:15:18.582959 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 7 06:15:18.876573 sshd[6510]: Connection closed by 147.75.109.163 port 43660
Jul 7 06:15:18.876761 sshd-session[6508]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:18.881843 systemd-logind[1514]: Session 18 logged out. Waiting for processes to exit.
Jul 7 06:15:18.883818 systemd[1]: sshd@18-172.234.200.33:22-147.75.109.163:43660.service: Deactivated successfully.
Jul 7 06:15:18.887636 systemd[1]: session-18.scope: Deactivated successfully.
Jul 7 06:15:18.891703 systemd-logind[1514]: Removed session 18.
Jul 7 06:15:19.746551 kubelet[2715]: E0707 06:15:19.746213 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Jul 7 06:15:23.940688 systemd[1]: Started sshd@19-172.234.200.33:22-147.75.109.163:43676.service - OpenSSH per-connection server daemon (147.75.109.163:43676).
Jul 7 06:15:24.288443 sshd[6523]: Accepted publickey for core from 147.75.109.163 port 43676 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4
Jul 7 06:15:24.290109 sshd-session[6523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:24.295402 systemd-logind[1514]: New session 19 of user core.
Jul 7 06:15:24.301005 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 7 06:15:24.421947 update_engine[1517]: I20250707 06:15:24.421867 1517 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 7 06:15:24.422340 update_engine[1517]: I20250707 06:15:24.422139 1517 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 7 06:15:24.422863 update_engine[1517]: I20250707 06:15:24.422382 1517 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 7 06:15:24.424240 update_engine[1517]: E20250707 06:15:24.424202 1517 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 7 06:15:24.424313 update_engine[1517]: I20250707 06:15:24.424255 1517 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jul 7 06:15:24.618871 sshd[6525]: Connection closed by 147.75.109.163 port 43676
Jul 7 06:15:24.620332 sshd-session[6523]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:24.624412 systemd[1]: sshd@19-172.234.200.33:22-147.75.109.163:43676.service: Deactivated successfully.
Jul 7 06:15:24.625881 systemd-logind[1514]: Session 19 logged out. Waiting for processes to exit.
Jul 7 06:15:24.627934 systemd[1]: session-19.scope: Deactivated successfully.
Jul 7 06:15:24.632520 systemd-logind[1514]: Removed session 19.
Jul 7 06:15:25.746014 kubelet[2715]: E0707 06:15:25.745980 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Jul 7 06:15:27.559648 containerd[1540]: time="2025-07-07T06:15:27.559606520Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a517e62b763c3475832064ac7c7ae4b8fbcf7d330acf96db32514f6525828d44\" id:\"ac9e4e5b280fbeceb5e380bc23f9732ba11a8094a386eb3ba60e112c84eed781\" pid:6549 exited_at:{seconds:1751868927 nanos:559359409}"
Jul 7 06:15:29.680735 systemd[1]: Started sshd@20-172.234.200.33:22-147.75.109.163:50192.service - OpenSSH per-connection server daemon (147.75.109.163:50192).
Jul 7 06:15:30.036435 sshd[6561]: Accepted publickey for core from 147.75.109.163 port 50192 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4
Jul 7 06:15:30.038331 sshd-session[6561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:30.044349 systemd-logind[1514]: New session 20 of user core.
Jul 7 06:15:30.053974 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 7 06:15:30.351071 sshd[6563]: Connection closed by 147.75.109.163 port 50192
Jul 7 06:15:30.352068 sshd-session[6561]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:30.356260 systemd[1]: sshd@20-172.234.200.33:22-147.75.109.163:50192.service: Deactivated successfully.
Jul 7 06:15:30.358576 systemd[1]: session-20.scope: Deactivated successfully.
Jul 7 06:15:30.359422 systemd-logind[1514]: Session 20 logged out. Waiting for processes to exit.
Jul 7 06:15:30.361375 systemd-logind[1514]: Removed session 20.
Jul 7 06:15:34.425340 update_engine[1517]: I20250707 06:15:34.425249 1517 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 7 06:15:34.425723 update_engine[1517]: I20250707 06:15:34.425557 1517 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 7 06:15:34.425889 update_engine[1517]: I20250707 06:15:34.425864 1517 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 7 06:15:34.426426 update_engine[1517]: E20250707 06:15:34.426395 1517 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 7 06:15:34.426458 update_engine[1517]: I20250707 06:15:34.426446 1517 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jul 7 06:15:35.418030 systemd[1]: Started sshd@21-172.234.200.33:22-147.75.109.163:50206.service - OpenSSH per-connection server daemon (147.75.109.163:50206).
Jul 7 06:15:35.757509 sshd[6577]: Accepted publickey for core from 147.75.109.163 port 50206 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4
Jul 7 06:15:35.759495 sshd-session[6577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:35.765860 systemd-logind[1514]: New session 21 of user core.
Jul 7 06:15:35.768963 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 7 06:15:36.075956 sshd[6579]: Connection closed by 147.75.109.163 port 50206
Jul 7 06:15:36.076928 sshd-session[6577]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:36.081710 systemd[1]: sshd@21-172.234.200.33:22-147.75.109.163:50206.service: Deactivated successfully.
Jul 7 06:15:36.084773 systemd[1]: session-21.scope: Deactivated successfully.
Jul 7 06:15:36.085706 systemd-logind[1514]: Session 21 logged out. Waiting for processes to exit.
Jul 7 06:15:36.087295 systemd-logind[1514]: Removed session 21.
Jul 7 06:15:41.138265 systemd[1]: Started sshd@22-172.234.200.33:22-147.75.109.163:59560.service - OpenSSH per-connection server daemon (147.75.109.163:59560).
Jul 7 06:15:41.480349 sshd[6593]: Accepted publickey for core from 147.75.109.163 port 59560 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4
Jul 7 06:15:41.482110 sshd-session[6593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:41.488288 systemd-logind[1514]: New session 22 of user core.
Jul 7 06:15:41.492947 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 7 06:15:41.775326 sshd[6595]: Connection closed by 147.75.109.163 port 59560
Jul 7 06:15:41.776033 sshd-session[6593]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:41.784292 systemd[1]: sshd@22-172.234.200.33:22-147.75.109.163:59560.service: Deactivated successfully.
Jul 7 06:15:41.786711 systemd[1]: session-22.scope: Deactivated successfully.
Jul 7 06:15:41.787516 systemd-logind[1514]: Session 22 logged out. Waiting for processes to exit.
Jul 7 06:15:41.789452 systemd-logind[1514]: Removed session 22.
Jul 7 06:15:43.892657 containerd[1540]: time="2025-07-07T06:15:43.892622725Z" level=info msg="TaskExit event in podsandbox handler container_id:\"48376a28ef59cb78e2c38bff6fed39c083881feb9c1a87f0e531728a0e01f710\" id:\"e3c4bc8654060fddf56b0e0dca3b2f4973d5fe26742f5ed48e22e6572f832d5b\" pid:6619 exited_at:{seconds:1751868943 nanos:892386665}"
Jul 7 06:15:44.422652 update_engine[1517]: I20250707 06:15:44.422586 1517 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 7 06:15:44.423136 update_engine[1517]: I20250707 06:15:44.422864 1517 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 7 06:15:44.423136 update_engine[1517]: I20250707 06:15:44.423080 1517 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 7 06:15:44.423753 update_engine[1517]: E20250707 06:15:44.423717 1517 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 7 06:15:44.423813 update_engine[1517]: I20250707 06:15:44.423774 1517 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 7 06:15:44.423813 update_engine[1517]: I20250707 06:15:44.423784 1517 omaha_request_action.cc:617] Omaha request response:
Jul 7 06:15:44.423921 update_engine[1517]: E20250707 06:15:44.423885 1517 omaha_request_action.cc:636] Omaha request network transfer failed.
Jul 7 06:15:44.423921 update_engine[1517]: I20250707 06:15:44.423906 1517 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jul 7 06:15:44.423921 update_engine[1517]: I20250707 06:15:44.423911 1517 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 7 06:15:44.423921 update_engine[1517]: I20250707 06:15:44.423918 1517 update_attempter.cc:306] Processing Done.
Jul 7 06:15:44.424077 update_engine[1517]: E20250707 06:15:44.423931 1517 update_attempter.cc:619] Update failed.
Jul 7 06:15:44.424077 update_engine[1517]: I20250707 06:15:44.423936 1517 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jul 7 06:15:44.424077 update_engine[1517]: I20250707 06:15:44.423941 1517 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jul 7 06:15:44.424077 update_engine[1517]: I20250707 06:15:44.423947 1517 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jul 7 06:15:44.424077 update_engine[1517]: I20250707 06:15:44.424012 1517 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 7 06:15:44.424077 update_engine[1517]: I20250707 06:15:44.424031 1517 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jul 7 06:15:44.424077 update_engine[1517]: I20250707 06:15:44.424036 1517 omaha_request_action.cc:272] Request:
Jul 7 06:15:44.424077 update_engine[1517]:
Jul 7 06:15:44.424077 update_engine[1517]:
Jul 7 06:15:44.424077 update_engine[1517]:
Jul 7 06:15:44.424077 update_engine[1517]:
Jul 7 06:15:44.424077 update_engine[1517]:
Jul 7 06:15:44.424077 update_engine[1517]:
Jul 7 06:15:44.424077 update_engine[1517]: I20250707 06:15:44.424043 1517 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 7 06:15:44.424446 update_engine[1517]: I20250707 06:15:44.424188 1517 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 7 06:15:44.424446 update_engine[1517]: I20250707 06:15:44.424362 1517 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 7 06:15:44.424872 locksmithd[1555]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jul 7 06:15:44.425167 update_engine[1517]: E20250707 06:15:44.425060 1517 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 7 06:15:44.425167 update_engine[1517]: I20250707 06:15:44.425102 1517 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 7 06:15:44.425167 update_engine[1517]: I20250707 06:15:44.425112 1517 omaha_request_action.cc:617] Omaha request response:
Jul 7 06:15:44.425167 update_engine[1517]: I20250707 06:15:44.425118 1517 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 7 06:15:44.425167 update_engine[1517]: I20250707 06:15:44.425127 1517 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 7 06:15:44.425167 update_engine[1517]: I20250707 06:15:44.425135 1517 update_attempter.cc:306] Processing Done.
Jul 7 06:15:44.425167 update_engine[1517]: I20250707 06:15:44.425144 1517 update_attempter.cc:310] Error event sent.
Jul 7 06:15:44.425167 update_engine[1517]: I20250707 06:15:44.425154 1517 update_check_scheduler.cc:74] Next update check in 45m48s
Jul 7 06:15:44.425386 locksmithd[1555]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jul 7 06:15:44.961218 containerd[1540]: time="2025-07-07T06:15:44.961177281Z" level=info msg="TaskExit event in podsandbox handler container_id:\"edd672ee4b40f702d7c9f87d51e2e4dcdb65fbfabdaf6281b2a5a12ad129163c\" id:\"0b700e1a7d7972a9f365a6e33db2dcc613e91a6948d5c940d37d737e44c521b5\" pid:6640 exited_at:{seconds:1751868944 nanos:960758500}"
Jul 7 06:15:46.845947 systemd[1]: Started sshd@23-172.234.200.33:22-147.75.109.163:46658.service - OpenSSH per-connection server daemon (147.75.109.163:46658).
Jul 7 06:15:47.188342 sshd[6652]: Accepted publickey for core from 147.75.109.163 port 46658 ssh2: RSA SHA256:RJDeSiNPTWXaxADUhVJ5ppC20cnbEmaobjBhEu4KWl4
Jul 7 06:15:47.189863 sshd-session[6652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:47.194731 systemd-logind[1514]: New session 23 of user core.
Jul 7 06:15:47.199944 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 7 06:15:47.495410 sshd[6654]: Connection closed by 147.75.109.163 port 46658
Jul 7 06:15:47.496539 sshd-session[6652]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:47.501341 systemd[1]: sshd@23-172.234.200.33:22-147.75.109.163:46658.service: Deactivated successfully.
Jul 7 06:15:47.504037 systemd[1]: session-23.scope: Deactivated successfully.
Jul 7 06:15:47.504937 systemd-logind[1514]: Session 23 logged out. Waiting for processes to exit.
Jul 7 06:15:47.506716 systemd-logind[1514]: Removed session 23.