Jul 12 00:07:33.396512 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 12 00:07:33.396536 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jul 11 22:42:11 -00 2025
Jul 12 00:07:33.396545 kernel: KASLR enabled
Jul 12 00:07:33.396551 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jul 12 00:07:33.396559 kernel: printk: bootconsole [pl11] enabled
Jul 12 00:07:33.396564 kernel: efi: EFI v2.7 by EDK II
Jul 12 00:07:33.396572 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Jul 12 00:07:33.396578 kernel: random: crng init done
Jul 12 00:07:33.396584 kernel: ACPI: Early table checksum verification disabled
Jul 12 00:07:33.396589 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jul 12 00:07:33.396596 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 12 00:07:33.396602 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 12 00:07:33.396610 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jul 12 00:07:33.396616 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 12 00:07:33.396623 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 12 00:07:33.396630 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 12 00:07:33.396636 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 12 00:07:33.396644 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 12 00:07:33.396650 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 12 00:07:33.396657 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jul 12 00:07:33.396663 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 12 00:07:33.396669 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jul 12 00:07:33.396676 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jul 12 00:07:33.396682 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jul 12 00:07:33.396688 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jul 12 00:07:33.396695 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jul 12 00:07:33.396701 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jul 12 00:07:33.396707 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jul 12 00:07:33.396716 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jul 12 00:07:33.396722 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jul 12 00:07:33.396729 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jul 12 00:07:33.396735 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jul 12 00:07:33.396742 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jul 12 00:07:33.396748 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jul 12 00:07:33.396754 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Jul 12 00:07:33.396760 kernel: Zone ranges:
Jul 12 00:07:33.396767 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jul 12 00:07:33.396773 kernel: DMA32 empty
Jul 12 00:07:33.396779 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jul 12 00:07:33.396785 kernel: Movable zone start for each node
Jul 12 00:07:33.396796 kernel: Early memory node ranges
Jul 12 00:07:33.396803 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jul 12 00:07:33.396810 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Jul 12 00:07:33.396817 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jul 12 00:07:33.396823 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jul 12 00:07:33.396832 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jul 12 00:07:33.396838 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jul 12 00:07:33.396845 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jul 12 00:07:33.396852 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jul 12 00:07:33.396859 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jul 12 00:07:33.396865 kernel: psci: probing for conduit method from ACPI.
Jul 12 00:07:33.396872 kernel: psci: PSCIv1.1 detected in firmware.
Jul 12 00:07:33.396879 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 12 00:07:33.396885 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jul 12 00:07:33.396892 kernel: psci: SMC Calling Convention v1.4
Jul 12 00:07:33.396899 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jul 12 00:07:33.396905 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jul 12 00:07:33.396914 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 12 00:07:33.396920 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 12 00:07:33.396927 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 12 00:07:33.396934 kernel: Detected PIPT I-cache on CPU0
Jul 12 00:07:33.396940 kernel: CPU features: detected: GIC system register CPU interface
Jul 12 00:07:33.396947 kernel: CPU features: detected: Hardware dirty bit management
Jul 12 00:07:33.396954 kernel: CPU features: detected: Spectre-BHB
Jul 12 00:07:33.396961 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 12 00:07:33.396967 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 12 00:07:33.396974 kernel: CPU features: detected: ARM erratum 1418040
Jul 12 00:07:33.396981 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jul 12 00:07:33.396990 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 12 00:07:33.396996 kernel: alternatives: applying boot alternatives
Jul 12 00:07:33.397004 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c
Jul 12 00:07:33.397012 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 12 00:07:33.397018 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 12 00:07:33.397025 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 12 00:07:33.397032 kernel: Fallback order for Node 0: 0
Jul 12 00:07:33.397039 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jul 12 00:07:33.397045 kernel: Policy zone: Normal
Jul 12 00:07:33.397052 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 12 00:07:33.397059 kernel: software IO TLB: area num 2.
Jul 12 00:07:33.397067 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Jul 12 00:07:33.397074 kernel: Memory: 3982628K/4194160K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 211532K reserved, 0K cma-reserved)
Jul 12 00:07:33.397081 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 12 00:07:33.397088 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 12 00:07:33.397095 kernel: rcu: RCU event tracing is enabled.
Jul 12 00:07:33.397102 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 12 00:07:33.397109 kernel: Trampoline variant of Tasks RCU enabled.
Jul 12 00:07:33.397116 kernel: Tracing variant of Tasks RCU enabled.
Jul 12 00:07:33.397122 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 12 00:07:33.397129 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 12 00:07:33.397136 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 12 00:07:33.397144 kernel: GICv3: 960 SPIs implemented
Jul 12 00:07:33.401219 kernel: GICv3: 0 Extended SPIs implemented
Jul 12 00:07:33.401238 kernel: Root IRQ handler: gic_handle_irq
Jul 12 00:07:33.401246 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 12 00:07:33.401253 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jul 12 00:07:33.401260 kernel: ITS: No ITS available, not enabling LPIs
Jul 12 00:07:33.401268 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 12 00:07:33.401275 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:07:33.401282 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 12 00:07:33.401289 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 12 00:07:33.401296 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 12 00:07:33.401311 kernel: Console: colour dummy device 80x25
Jul 12 00:07:33.401319 kernel: printk: console [tty1] enabled
Jul 12 00:07:33.401326 kernel: ACPI: Core revision 20230628
Jul 12 00:07:33.401333 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 12 00:07:33.401340 kernel: pid_max: default: 32768 minimum: 301
Jul 12 00:07:33.401347 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 12 00:07:33.401354 kernel: landlock: Up and running.
Jul 12 00:07:33.401362 kernel: SELinux: Initializing.
Jul 12 00:07:33.401369 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:07:33.401376 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:07:33.401385 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 12 00:07:33.401392 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 12 00:07:33.401399 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Jul 12 00:07:33.401407 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Jul 12 00:07:33.401414 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul 12 00:07:33.401421 kernel: rcu: Hierarchical SRCU implementation.
Jul 12 00:07:33.401429 kernel: rcu: Max phase no-delay instances is 400.
Jul 12 00:07:33.401444 kernel: Remapping and enabling EFI services.
Jul 12 00:07:33.401452 kernel: smp: Bringing up secondary CPUs ...
Jul 12 00:07:33.401459 kernel: Detected PIPT I-cache on CPU1
Jul 12 00:07:33.401466 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jul 12 00:07:33.401476 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:07:33.401483 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 12 00:07:33.401491 kernel: smp: Brought up 1 node, 2 CPUs
Jul 12 00:07:33.401498 kernel: SMP: Total of 2 processors activated.
Jul 12 00:07:33.401505 kernel: CPU features: detected: 32-bit EL0 Support
Jul 12 00:07:33.401515 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jul 12 00:07:33.401522 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 12 00:07:33.401530 kernel: CPU features: detected: CRC32 instructions
Jul 12 00:07:33.401537 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 12 00:07:33.401544 kernel: CPU features: detected: LSE atomic instructions
Jul 12 00:07:33.401552 kernel: CPU features: detected: Privileged Access Never
Jul 12 00:07:33.401560 kernel: CPU: All CPU(s) started at EL1
Jul 12 00:07:33.401567 kernel: alternatives: applying system-wide alternatives
Jul 12 00:07:33.401575 kernel: devtmpfs: initialized
Jul 12 00:07:33.401584 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 12 00:07:33.401592 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 12 00:07:33.401600 kernel: pinctrl core: initialized pinctrl subsystem
Jul 12 00:07:33.401607 kernel: SMBIOS 3.1.0 present.
Jul 12 00:07:33.401615 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jul 12 00:07:33.401623 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 12 00:07:33.401630 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 12 00:07:33.401637 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 12 00:07:33.401645 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 12 00:07:33.401654 kernel: audit: initializing netlink subsys (disabled)
Jul 12 00:07:33.401661 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Jul 12 00:07:33.401669 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 12 00:07:33.401677 kernel: cpuidle: using governor menu
Jul 12 00:07:33.401684 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 12 00:07:33.401692 kernel: ASID allocator initialised with 32768 entries
Jul 12 00:07:33.401699 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 12 00:07:33.401706 kernel: Serial: AMBA PL011 UART driver
Jul 12 00:07:33.401714 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 12 00:07:33.401723 kernel: Modules: 0 pages in range for non-PLT usage
Jul 12 00:07:33.401730 kernel: Modules: 509008 pages in range for PLT usage
Jul 12 00:07:33.401738 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 12 00:07:33.401745 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 12 00:07:33.401753 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 12 00:07:33.401760 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 12 00:07:33.401768 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 12 00:07:33.401775 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 12 00:07:33.401782 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 12 00:07:33.401791 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 12 00:07:33.401799 kernel: ACPI: Added _OSI(Module Device)
Jul 12 00:07:33.401806 kernel: ACPI: Added _OSI(Processor Device)
Jul 12 00:07:33.401814 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 12 00:07:33.401821 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 12 00:07:33.401829 kernel: ACPI: Interpreter enabled
Jul 12 00:07:33.401836 kernel: ACPI: Using GIC for interrupt routing
Jul 12 00:07:33.401844 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jul 12 00:07:33.401851 kernel: printk: console [ttyAMA0] enabled
Jul 12 00:07:33.401861 kernel: printk: bootconsole [pl11] disabled
Jul 12 00:07:33.401868 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jul 12 00:07:33.401876 kernel: iommu: Default domain type: Translated
Jul 12 00:07:33.401883 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 12 00:07:33.401891 kernel: efivars: Registered efivars operations
Jul 12 00:07:33.401898 kernel: vgaarb: loaded
Jul 12 00:07:33.401906 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 12 00:07:33.401913 kernel: VFS: Disk quotas dquot_6.6.0
Jul 12 00:07:33.401921 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 12 00:07:33.401930 kernel: pnp: PnP ACPI init
Jul 12 00:07:33.401937 kernel: pnp: PnP ACPI: found 0 devices
Jul 12 00:07:33.401945 kernel: NET: Registered PF_INET protocol family
Jul 12 00:07:33.401952 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 12 00:07:33.401960 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 12 00:07:33.401967 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 12 00:07:33.401974 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 12 00:07:33.401982 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 12 00:07:33.401990 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 12 00:07:33.401998 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:07:33.402006 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:07:33.402014 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 12 00:07:33.402021 kernel: PCI: CLS 0 bytes, default 64
Jul 12 00:07:33.402028 kernel: kvm [1]: HYP mode not available
Jul 12 00:07:33.402036 kernel: Initialise system trusted keyrings
Jul 12 00:07:33.402043 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 12 00:07:33.402050 kernel: Key type asymmetric registered
Jul 12 00:07:33.402058 kernel: Asymmetric key parser 'x509' registered
Jul 12 00:07:33.402067 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 12 00:07:33.402074 kernel: io scheduler mq-deadline registered
Jul 12 00:07:33.402081 kernel: io scheduler kyber registered
Jul 12 00:07:33.402088 kernel: io scheduler bfq registered
Jul 12 00:07:33.402096 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 12 00:07:33.402103 kernel: thunder_xcv, ver 1.0
Jul 12 00:07:33.402111 kernel: thunder_bgx, ver 1.0
Jul 12 00:07:33.402118 kernel: nicpf, ver 1.0
Jul 12 00:07:33.402126 kernel: nicvf, ver 1.0
Jul 12 00:07:33.402302 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 12 00:07:33.402381 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-12T00:07:32 UTC (1752278852)
Jul 12 00:07:33.402392 kernel: efifb: probing for efifb
Jul 12 00:07:33.402400 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 12 00:07:33.402407 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 12 00:07:33.402414 kernel: efifb: scrolling: redraw
Jul 12 00:07:33.402422 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 12 00:07:33.402429 kernel: Console: switching to colour frame buffer device 128x48
Jul 12 00:07:33.402439 kernel: fb0: EFI VGA frame buffer device
Jul 12 00:07:33.402446 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jul 12 00:07:33.402454 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 12 00:07:33.402461 kernel: No ACPI PMU IRQ for CPU0
Jul 12 00:07:33.402468 kernel: No ACPI PMU IRQ for CPU1
Jul 12 00:07:33.402476 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Jul 12 00:07:33.402483 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 12 00:07:33.402490 kernel: watchdog: Hard watchdog permanently disabled
Jul 12 00:07:33.402498 kernel: NET: Registered PF_INET6 protocol family
Jul 12 00:07:33.402506 kernel: Segment Routing with IPv6
Jul 12 00:07:33.402514 kernel: In-situ OAM (IOAM) with IPv6
Jul 12 00:07:33.402521 kernel: NET: Registered PF_PACKET protocol family
Jul 12 00:07:33.402529 kernel: Key type dns_resolver registered
Jul 12 00:07:33.402536 kernel: registered taskstats version 1
Jul 12 00:07:33.402543 kernel: Loading compiled-in X.509 certificates
Jul 12 00:07:33.402551 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: ed6b382df707adbd5942eaa048a1031fe26cbf15'
Jul 12 00:07:33.402559 kernel: Key type .fscrypt registered
Jul 12 00:07:33.402566 kernel: Key type fscrypt-provisioning registered
Jul 12 00:07:33.402575 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 12 00:07:33.402582 kernel: ima: Allocated hash algorithm: sha1
Jul 12 00:07:33.402590 kernel: ima: No architecture policies found
Jul 12 00:07:33.402597 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 12 00:07:33.402604 kernel: clk: Disabling unused clocks
Jul 12 00:07:33.402612 kernel: Freeing unused kernel memory: 39424K
Jul 12 00:07:33.402619 kernel: Run /init as init process
Jul 12 00:07:33.402626 kernel: with arguments:
Jul 12 00:07:33.402634 kernel: /init
Jul 12 00:07:33.402643 kernel: with environment:
Jul 12 00:07:33.402650 kernel: HOME=/
Jul 12 00:07:33.402657 kernel: TERM=linux
Jul 12 00:07:33.402665 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 12 00:07:33.402675 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 12 00:07:33.402685 systemd[1]: Detected virtualization microsoft.
Jul 12 00:07:33.402694 systemd[1]: Detected architecture arm64.
Jul 12 00:07:33.402701 systemd[1]: Running in initrd.
Jul 12 00:07:33.402711 systemd[1]: No hostname configured, using default hostname.
Jul 12 00:07:33.402718 systemd[1]: Hostname set to .
Jul 12 00:07:33.402727 systemd[1]: Initializing machine ID from random generator.
Jul 12 00:07:33.402734 systemd[1]: Queued start job for default target initrd.target.
Jul 12 00:07:33.402742 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:07:33.402751 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:07:33.402760 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 12 00:07:33.402768 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 12 00:07:33.402778 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 12 00:07:33.402786 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 12 00:07:33.402796 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 12 00:07:33.402804 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 12 00:07:33.402812 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:07:33.402820 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:07:33.402830 systemd[1]: Reached target paths.target - Path Units.
Jul 12 00:07:33.402838 systemd[1]: Reached target slices.target - Slice Units.
Jul 12 00:07:33.402846 systemd[1]: Reached target swap.target - Swaps.
Jul 12 00:07:33.402854 systemd[1]: Reached target timers.target - Timer Units.
Jul 12 00:07:33.402862 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 12 00:07:33.402871 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 12 00:07:33.402879 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 12 00:07:33.402887 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 12 00:07:33.402895 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:07:33.402904 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 12 00:07:33.402912 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 00:07:33.402920 systemd[1]: Reached target sockets.target - Socket Units.
Jul 12 00:07:33.402928 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 12 00:07:33.402937 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 12 00:07:33.402945 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 12 00:07:33.402952 systemd[1]: Starting systemd-fsck-usr.service...
Jul 12 00:07:33.402960 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 12 00:07:33.402968 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 12 00:07:33.403000 systemd-journald[216]: Collecting audit messages is disabled.
Jul 12 00:07:33.403022 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:07:33.403031 systemd-journald[216]: Journal started
Jul 12 00:07:33.403053 systemd-journald[216]: Runtime Journal (/run/log/journal/7ea9d4e6272f41c7a4a97db60b671501) is 8.0M, max 78.5M, 70.5M free.
Jul 12 00:07:33.409456 systemd-modules-load[217]: Inserted module 'overlay'
Jul 12 00:07:33.436050 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 12 00:07:33.454641 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 12 00:07:33.457195 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 12 00:07:33.467136 kernel: Bridge firewalling registered
Jul 12 00:07:33.468231 systemd-modules-load[217]: Inserted module 'br_netfilter'
Jul 12 00:07:33.476231 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:07:33.488389 systemd[1]: Finished systemd-fsck-usr.service.
Jul 12 00:07:33.500118 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:07:33.512559 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:07:33.537078 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:07:33.552650 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 12 00:07:33.572430 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 12 00:07:33.598401 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 12 00:07:33.612555 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:07:33.621524 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:07:33.636332 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 12 00:07:33.650501 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 00:07:33.679456 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 12 00:07:33.694144 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 12 00:07:33.710739 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 12 00:07:33.730563 dracut-cmdline[250]: dracut-dracut-053
Jul 12 00:07:33.730563 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c
Jul 12 00:07:33.772271 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 00:07:33.785697 systemd-resolved[254]: Positive Trust Anchors:
Jul 12 00:07:33.785707 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 00:07:33.785740 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 12 00:07:33.788066 systemd-resolved[254]: Defaulting to hostname 'linux'.
Jul 12 00:07:33.789607 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 12 00:07:33.811938 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:07:33.887169 kernel: SCSI subsystem initialized
Jul 12 00:07:33.896175 kernel: Loading iSCSI transport class v2.0-870.
Jul 12 00:07:33.906179 kernel: iscsi: registered transport (tcp)
Jul 12 00:07:33.925483 kernel: iscsi: registered transport (qla4xxx)
Jul 12 00:07:33.925556 kernel: QLogic iSCSI HBA Driver
Jul 12 00:07:33.968258 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 12 00:07:33.989724 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 12 00:07:34.026176 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 12 00:07:34.026248 kernel: device-mapper: uevent: version 1.0.3
Jul 12 00:07:34.033304 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 12 00:07:34.084209 kernel: raid6: neonx8 gen() 15736 MB/s
Jul 12 00:07:34.105202 kernel: raid6: neonx4 gen() 15663 MB/s
Jul 12 00:07:34.125163 kernel: raid6: neonx2 gen() 13204 MB/s
Jul 12 00:07:34.146202 kernel: raid6: neonx1 gen() 10485 MB/s
Jul 12 00:07:34.166195 kernel: raid6: int64x8 gen() 6933 MB/s
Jul 12 00:07:34.186180 kernel: raid6: int64x4 gen() 7349 MB/s
Jul 12 00:07:34.207188 kernel: raid6: int64x2 gen() 6123 MB/s
Jul 12 00:07:34.230879 kernel: raid6: int64x1 gen() 5056 MB/s
Jul 12 00:07:34.230946 kernel: raid6: using algorithm neonx8 gen() 15736 MB/s
Jul 12 00:07:34.254693 kernel: raid6: .... xor() 11939 MB/s, rmw enabled
Jul 12 00:07:34.254774 kernel: raid6: using neon recovery algorithm
Jul 12 00:07:34.269135 kernel: xor: measuring software checksum speed
Jul 12 00:07:34.269176 kernel: 8regs : 19726 MB/sec
Jul 12 00:07:34.272750 kernel: 32regs : 19603 MB/sec
Jul 12 00:07:34.276507 kernel: arm64_neon : 27052 MB/sec
Jul 12 00:07:34.280965 kernel: xor: using function: arm64_neon (27052 MB/sec)
Jul 12 00:07:34.333178 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 12 00:07:34.345167 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 12 00:07:34.362335 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 00:07:34.386606 systemd-udevd[436]: Using default interface naming scheme 'v255'.
Jul 12 00:07:34.393210 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 00:07:34.414327 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 12 00:07:34.439516 dracut-pre-trigger[450]: rd.md=0: removing MD RAID activation
Jul 12 00:07:34.470683 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 12 00:07:34.489493 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 12 00:07:34.532136 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:07:34.554494 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 12 00:07:34.592866 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 12 00:07:34.608482 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 12 00:07:34.625367 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:07:34.640250 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 12 00:07:34.659302 kernel: hv_vmbus: Vmbus version:5.3
Jul 12 00:07:34.667391 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 12 00:07:34.687900 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 12 00:07:34.741541 kernel: hv_vmbus: registering driver hyperv_keyboard
Jul 12 00:07:34.741566 kernel: hv_vmbus: registering driver hid_hyperv
Jul 12 00:07:34.741576 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jul 12 00:07:34.741586 kernel: hv_vmbus: registering driver hv_netvsc
Jul 12 00:07:34.741595 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jul 12 00:07:34.741753 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 12 00:07:34.741763 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 12 00:07:34.716031 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 12 00:07:34.775588 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jul 12 00:07:34.775611 kernel: hv_vmbus: registering driver hv_storvsc
Jul 12 00:07:34.716181 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:07:34.821510 kernel: scsi host0: storvsc_host_t
Jul 12 00:07:34.821718 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jul 12 00:07:34.821744 kernel: scsi host1: storvsc_host_t
Jul 12 00:07:34.821839 kernel: PTP clock support registered
Jul 12 00:07:34.775335 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:07:34.844462 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jul 12 00:07:34.797115 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 00:07:34.797365 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:07:34.877479 kernel: hv_netvsc 0022487a-b470-0022-487a-b4700022487a eth0: VF slot 1 added
Jul 12 00:07:34.811816 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:07:34.850813 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:07:34.919553 kernel: hv_utils: Registering HyperV Utility Driver
Jul 12 00:07:34.919579 kernel: hv_vmbus: registering driver hv_utils
Jul 12 00:07:34.919597 kernel: hv_utils: Heartbeat IC version 3.0
Jul 12 00:07:34.919607 kernel: hv_utils: Shutdown IC version 3.2
Jul 12 00:07:34.868575 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 00:07:35.292629 kernel: hv_utils: TimeSync IC version 4.0
Jul 12 00:07:35.292654 kernel: hv_vmbus: registering driver hv_pci
Jul 12 00:07:34.868750 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:07:35.329838 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jul 12 00:07:35.330078 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 12 00:07:35.330090 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jul 12 00:07:35.330194 kernel: hv_pci 656047d6-af5d-4751-a91f-2e89a8827f23: PCI VMBus probing: Using version 0x10004
Jul 12 00:07:35.288557 systemd-resolved[254]: Clock change detected. Flushing caches.
Jul 12 00:07:35.292427 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:07:35.331957 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:07:35.524424 kernel: hv_pci 656047d6-af5d-4751-a91f-2e89a8827f23: PCI host bridge to bus af5d:00
Jul 12 00:07:35.524676 kernel: pci_bus af5d:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jul 12 00:07:35.524791 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jul 12 00:07:35.531647 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jul 12 00:07:35.531768 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 12 00:07:35.531869 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jul 12 00:07:35.531965 kernel: pci_bus af5d:00: No busn resource found for root bus, will use [bus 00-ff]
Jul 12 00:07:35.532064 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jul 12 00:07:35.532161 kernel: pci af5d:00:02.0: [15b3:1018] type 00 class 0x020000
Jul 12 00:07:35.535999 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:07:35.611725 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 12 00:07:35.611756 kernel: pci af5d:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 12 00:07:35.611788 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 12 00:07:35.611954 kernel: pci af5d:00:02.0: enabling Extended Tags
Jul 12 00:07:35.614581 kernel: pci af5d:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at af5d:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jul 12 00:07:35.634275 kernel: pci_bus af5d:00: busn_res: [bus 00-ff] end is updated to 00
Jul 12 00:07:35.634500 kernel: pci af5d:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 12 00:07:35.647348 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:07:35.680877 kernel: mlx5_core af5d:00:02.0: enabling device (0000 -> 0002)
Jul 12 00:07:35.687272 kernel: mlx5_core af5d:00:02.0: firmware version: 16.31.2424
Jul 12 00:07:35.980479 kernel: hv_netvsc 0022487a-b470-0022-487a-b4700022487a eth0: VF registering: eth1
Jul 12 00:07:35.980733 kernel: mlx5_core af5d:00:02.0 eth1: joined to eth0
Jul 12 00:07:35.990307 kernel: mlx5_core af5d:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jul 12 00:07:36.002276 kernel: mlx5_core af5d:00:02.0 enP44893s1: renamed from eth1
Jul 12 00:07:36.182088 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (499)
Jul 12 00:07:36.192077 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jul 12 00:07:36.216231 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 12 00:07:36.236541 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jul 12 00:07:36.290298 kernel: BTRFS: device fsid 394cecf3-1fd4-438a-991e-dc2b4121da0c devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (494)
Jul 12 00:07:36.306172 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jul 12 00:07:36.314203 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jul 12 00:07:36.356428 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 12 00:07:36.382287 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 12 00:07:36.394286 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 12 00:07:36.402290 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 12 00:07:37.403455 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 12 00:07:37.406278 disk-uuid[604]: The operation has completed successfully.
Jul 12 00:07:37.468387 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 12 00:07:37.468509 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 12 00:07:37.498486 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 12 00:07:37.511495 sh[717]: Success
Jul 12 00:07:37.542359 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 12 00:07:37.720754 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 12 00:07:37.730414 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 12 00:07:37.740147 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 12 00:07:37.775140 kernel: BTRFS info (device dm-0): first mount of filesystem 394cecf3-1fd4-438a-991e-dc2b4121da0c
Jul 12 00:07:37.775197 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:07:37.782232 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 12 00:07:37.787624 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 12 00:07:37.792024 kernel: BTRFS info (device dm-0): using free space tree
Jul 12 00:07:38.028002 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 12 00:07:38.033714 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 12 00:07:38.050555 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 12 00:07:38.058898 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 12 00:07:38.101963 kernel: BTRFS info (device sda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:07:38.101995 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:07:38.102005 kernel: BTRFS info (device sda6): using free space tree
Jul 12 00:07:38.138738 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 12 00:07:38.148660 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 12 00:07:38.160024 kernel: BTRFS info (device sda6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:07:38.168607 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 12 00:07:38.182564 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 12 00:07:38.239182 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 12 00:07:38.261447 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 12 00:07:38.292866 systemd-networkd[901]: lo: Link UP
Jul 12 00:07:38.292876 systemd-networkd[901]: lo: Gained carrier
Jul 12 00:07:38.297228 systemd-networkd[901]: Enumeration completed
Jul 12 00:07:38.297526 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 12 00:07:38.310706 systemd-networkd[901]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:07:38.310710 systemd-networkd[901]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 12 00:07:38.311150 systemd[1]: Reached target network.target - Network.
Jul 12 00:07:38.630283 kernel: mlx5_core af5d:00:02.0 enP44893s1: Link up
Jul 12 00:07:38.878287 kernel: hv_netvsc 0022487a-b470-0022-487a-b4700022487a eth0: Data path switched to VF: enP44893s1
Jul 12 00:07:38.879051 systemd-networkd[901]: enP44893s1: Link UP
Jul 12 00:07:38.879310 systemd-networkd[901]: eth0: Link UP
Jul 12 00:07:38.879705 systemd-networkd[901]: eth0: Gained carrier
Jul 12 00:07:38.879716 systemd-networkd[901]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:07:38.891907 systemd-networkd[901]: enP44893s1: Gained carrier
Jul 12 00:07:38.917324 systemd-networkd[901]: eth0: DHCPv4 address 10.200.20.17/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 12 00:07:39.390897 ignition[838]: Ignition 2.19.0
Jul 12 00:07:39.390910 ignition[838]: Stage: fetch-offline
Jul 12 00:07:39.395940 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 12 00:07:39.390952 ignition[838]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:07:39.390960 ignition[838]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 12 00:07:39.391056 ignition[838]: parsed url from cmdline: ""
Jul 12 00:07:39.419556 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 12 00:07:39.391059 ignition[838]: no config URL provided
Jul 12 00:07:39.391064 ignition[838]: reading system config file "/usr/lib/ignition/user.ign"
Jul 12 00:07:39.391071 ignition[838]: no config at "/usr/lib/ignition/user.ign"
Jul 12 00:07:39.391077 ignition[838]: failed to fetch config: resource requires networking
Jul 12 00:07:39.391405 ignition[838]: Ignition finished successfully
Jul 12 00:07:39.447733 ignition[910]: Ignition 2.19.0
Jul 12 00:07:39.447739 ignition[910]: Stage: fetch
Jul 12 00:07:39.447930 ignition[910]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:07:39.447940 ignition[910]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 12 00:07:39.448039 ignition[910]: parsed url from cmdline: ""
Jul 12 00:07:39.448043 ignition[910]: no config URL provided
Jul 12 00:07:39.448051 ignition[910]: reading system config file "/usr/lib/ignition/user.ign"
Jul 12 00:07:39.448058 ignition[910]: no config at "/usr/lib/ignition/user.ign"
Jul 12 00:07:39.448081 ignition[910]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jul 12 00:07:39.567181 ignition[910]: GET result: OK
Jul 12 00:07:39.567286 ignition[910]: config has been read from IMDS userdata
Jul 12 00:07:39.567337 ignition[910]: parsing config with SHA512: 4e8e8e6e1e3183ea1ff71397c04be537bf9486f6044e52c0003ea54969f4253329d465c724f838d806599362b9f329834c38a2fbd69494f5c102e1e8d9b4595c
Jul 12 00:07:39.571813 unknown[910]: fetched base config from "system"
Jul 12 00:07:39.572321 ignition[910]: fetch: fetch complete
Jul 12 00:07:39.571821 unknown[910]: fetched base config from "system"
Jul 12 00:07:39.572326 ignition[910]: fetch: fetch passed
Jul 12 00:07:39.571827 unknown[910]: fetched user config from "azure"
Jul 12 00:07:39.572398 ignition[910]: Ignition finished successfully
Jul 12 00:07:39.577343 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 12 00:07:39.617565 ignition[916]: Ignition 2.19.0
Jul 12 00:07:39.595558 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 12 00:07:39.617573 ignition[916]: Stage: kargs
Jul 12 00:07:39.631598 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 12 00:07:39.617804 ignition[916]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:07:39.653551 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 12 00:07:39.617815 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 12 00:07:39.619286 ignition[916]: kargs: kargs passed
Jul 12 00:07:39.682294 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 12 00:07:39.619347 ignition[916]: Ignition finished successfully
Jul 12 00:07:39.693829 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 12 00:07:39.675168 ignition[921]: Ignition 2.19.0
Jul 12 00:07:39.703479 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 12 00:07:39.675175 ignition[921]: Stage: disks
Jul 12 00:07:39.715691 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 12 00:07:39.675384 ignition[921]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:07:39.725639 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 12 00:07:39.675397 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 12 00:07:39.739247 systemd[1]: Reached target basic.target - Basic System.
Jul 12 00:07:39.676482 ignition[921]: disks: disks passed
Jul 12 00:07:39.758557 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 12 00:07:39.676539 ignition[921]: Ignition finished successfully
Jul 12 00:07:39.819630 systemd-fsck[931]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jul 12 00:07:39.827431 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 12 00:07:39.844654 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 12 00:07:39.907274 kernel: EXT4-fs (sda9): mounted filesystem 44c8362f-9431-4909-bc9a-f90e514bd0e9 r/w with ordered data mode. Quota mode: none.
Jul 12 00:07:39.908204 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 12 00:07:39.914245 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 12 00:07:39.967385 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 12 00:07:39.999882 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (942)
Jul 12 00:07:39.999908 kernel: BTRFS info (device sda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:07:39.999918 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:07:39.978048 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 12 00:07:40.014114 kernel: BTRFS info (device sda6): using free space tree
Jul 12 00:07:40.020794 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 12 00:07:40.044437 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 12 00:07:40.030397 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 12 00:07:40.030439 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 12 00:07:40.058972 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 12 00:07:40.069792 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 12 00:07:40.090560 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 12 00:07:40.234448 systemd-networkd[901]: eth0: Gained IPv6LL
Jul 12 00:07:40.234773 systemd-networkd[901]: enP44893s1: Gained IPv6LL
Jul 12 00:07:40.515748 coreos-metadata[959]: Jul 12 00:07:40.515 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 12 00:07:40.524501 coreos-metadata[959]: Jul 12 00:07:40.524 INFO Fetch successful
Jul 12 00:07:40.524501 coreos-metadata[959]: Jul 12 00:07:40.524 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jul 12 00:07:40.543575 coreos-metadata[959]: Jul 12 00:07:40.537 INFO Fetch successful
Jul 12 00:07:40.552906 coreos-metadata[959]: Jul 12 00:07:40.552 INFO wrote hostname ci-4081.3.4-n-047a586f92 to /sysroot/etc/hostname
Jul 12 00:07:40.554401 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 12 00:07:40.797441 initrd-setup-root[972]: cut: /sysroot/etc/passwd: No such file or directory
Jul 12 00:07:40.843458 initrd-setup-root[979]: cut: /sysroot/etc/group: No such file or directory
Jul 12 00:07:40.852892 initrd-setup-root[986]: cut: /sysroot/etc/shadow: No such file or directory
Jul 12 00:07:40.863104 initrd-setup-root[993]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 12 00:07:41.719620 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 12 00:07:41.742513 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 12 00:07:41.750502 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 12 00:07:41.778129 kernel: BTRFS info (device sda6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:07:41.772322 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 12 00:07:41.809913 ignition[1061]: INFO : Ignition 2.19.0
Jul 12 00:07:41.815601 ignition[1061]: INFO : Stage: mount
Jul 12 00:07:41.815601 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:07:41.815601 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 12 00:07:41.813307 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 12 00:07:41.850060 ignition[1061]: INFO : mount: mount passed
Jul 12 00:07:41.850060 ignition[1061]: INFO : Ignition finished successfully
Jul 12 00:07:41.825935 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 12 00:07:41.854528 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 12 00:07:41.872558 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 12 00:07:41.913224 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1073)
Jul 12 00:07:41.913297 kernel: BTRFS info (device sda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:07:41.919701 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:07:41.924299 kernel: BTRFS info (device sda6): using free space tree
Jul 12 00:07:41.931285 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 12 00:07:41.932992 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 12 00:07:41.960277 ignition[1090]: INFO : Ignition 2.19.0
Jul 12 00:07:41.960277 ignition[1090]: INFO : Stage: files
Jul 12 00:07:41.969556 ignition[1090]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:07:41.969556 ignition[1090]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 12 00:07:41.969556 ignition[1090]: DEBUG : files: compiled without relabeling support, skipping
Jul 12 00:07:41.989641 ignition[1090]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 12 00:07:41.989641 ignition[1090]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 12 00:07:42.057775 ignition[1090]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 12 00:07:42.066596 ignition[1090]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 12 00:07:42.066596 ignition[1090]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 12 00:07:42.058322 unknown[1090]: wrote ssh authorized keys file for user: core
Jul 12 00:07:42.087515 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 12 00:07:42.087515 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jul 12 00:07:42.122580 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 12 00:07:42.209088 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 12 00:07:42.220295 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 12 00:07:42.220295 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 12 00:07:42.220295 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:07:42.220295 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:07:42.220295 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 00:07:42.220295 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 00:07:42.220295 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:07:42.220295 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:07:42.306009 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:07:42.306009 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:07:42.306009 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 12 00:07:42.306009 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 12 00:07:42.306009 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 12 00:07:42.306009 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Jul 12 00:07:42.866515 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 12 00:07:43.098726 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 12 00:07:43.098726 ignition[1090]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 12 00:07:43.136275 ignition[1090]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:07:43.154379 ignition[1090]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:07:43.154379 ignition[1090]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 12 00:07:43.154379 ignition[1090]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 12 00:07:43.154379 ignition[1090]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 12 00:07:43.154379 ignition[1090]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:07:43.154379 ignition[1090]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:07:43.154379 ignition[1090]: INFO : files: files passed
Jul 12 00:07:43.154379 ignition[1090]: INFO : Ignition finished successfully
Jul 12 00:07:43.148280 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 12 00:07:43.192207 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 12 00:07:43.211462 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 12 00:07:43.233624 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 12 00:07:43.279533 initrd-setup-root-after-ignition[1118]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:07:43.279533 initrd-setup-root-after-ignition[1118]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:07:43.235808 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 12 00:07:43.310657 initrd-setup-root-after-ignition[1123]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:07:43.264180 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 00:07:43.274700 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 12 00:07:43.311574 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 12 00:07:43.364696 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 12 00:07:43.364840 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 12 00:07:43.377767 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 12 00:07:43.390470 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 12 00:07:43.402384 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 12 00:07:43.419556 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 12 00:07:43.445678 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 00:07:43.462529 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 12 00:07:43.481843 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 12 00:07:43.481974 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 12 00:07:43.494647 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:07:43.508274 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:07:43.522098 systemd[1]: Stopped target timers.target - Timer Units.
Jul 12 00:07:43.533860 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 12 00:07:43.533945 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 00:07:43.551598 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 12 00:07:43.557741 systemd[1]: Stopped target basic.target - Basic System.
Jul 12 00:07:43.569532 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 12 00:07:43.581134 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 12 00:07:43.592622 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 12 00:07:43.604850 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 12 00:07:43.617076 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 12 00:07:43.630052 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 12 00:07:43.641238 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 12 00:07:43.653889 systemd[1]: Stopped target swap.target - Swaps.
Jul 12 00:07:43.664187 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 12 00:07:43.664310 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 12 00:07:43.680856 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:07:43.693000 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:07:43.706061 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 12 00:07:43.706129 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:07:43.719420 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 12 00:07:43.719504 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 12 00:07:43.738222 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 12 00:07:43.738314 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 00:07:43.746163 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 12 00:07:43.746220 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 12 00:07:43.759788 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 12 00:07:43.759847 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 12 00:07:43.804539 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 12 00:07:43.839673 ignition[1144]: INFO : Ignition 2.19.0
Jul 12 00:07:43.839673 ignition[1144]: INFO : Stage: umount
Jul 12 00:07:43.839673 ignition[1144]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:07:43.839673 ignition[1144]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 12 00:07:43.839673 ignition[1144]: INFO : umount: umount passed
Jul 12 00:07:43.839673 ignition[1144]: INFO : Ignition finished successfully
Jul 12 00:07:43.817971 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 12 00:07:43.818059 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:07:43.845514 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 12 00:07:43.853774 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 12 00:07:43.853856 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:07:43.877517 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 12 00:07:43.877591 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 12 00:07:43.893166 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 12 00:07:43.893707 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 12 00:07:43.893818 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 12 00:07:43.910793 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 12 00:07:43.910860 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 12 00:07:43.923065 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 12 00:07:43.923132 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 12 00:07:43.935354 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 12 00:07:43.935411 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 12 00:07:43.941574 systemd[1]: Stopped target network.target - Network.
Jul 12 00:07:43.953600 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 12 00:07:43.953661 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 12 00:07:43.967776 systemd[1]: Stopped target paths.target - Path Units.
Jul 12 00:07:43.980093 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 12 00:07:43.983297 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:07:43.993309 systemd[1]: Stopped target slices.target - Slice Units.
Jul 12 00:07:44.003605 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 12 00:07:44.014670 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 12 00:07:44.014725 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 12 00:07:44.026909 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 12 00:07:44.026972 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 12 00:07:44.039045 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 12 00:07:44.039108 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 12 00:07:44.046895 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 12 00:07:44.046944 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 12 00:07:44.061878 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 12 00:07:44.076531 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 12 00:07:44.096385 systemd-networkd[901]: eth0: DHCPv6 lease lost Jul 12 00:07:44.096477 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 12 00:07:44.359853 kernel: hv_netvsc 0022487a-b470-0022-487a-b4700022487a eth0: Data path switched from VF: enP44893s1 Jul 12 00:07:44.096621 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 12 00:07:44.109655 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 12 00:07:44.109830 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 12 00:07:44.123452 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 12 00:07:44.123517 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 12 00:07:44.155744 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 12 00:07:44.161840 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 12 00:07:44.161925 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 12 00:07:44.178717 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 12 00:07:44.178786 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:07:44.190407 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 12 00:07:44.190469 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 12 00:07:44.203558 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 12 00:07:44.203616 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jul 12 00:07:44.220096 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 00:07:44.261237 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 12 00:07:44.261442 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 00:07:44.274045 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 12 00:07:44.274107 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 12 00:07:44.285495 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 12 00:07:44.285541 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 00:07:44.297696 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 12 00:07:44.297751 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 12 00:07:44.315131 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 12 00:07:44.315190 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 12 00:07:44.342445 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 12 00:07:44.342512 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 12 00:07:44.380526 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 12 00:07:44.396671 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 12 00:07:44.396753 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 12 00:07:44.410541 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 00:07:44.410603 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:07:44.423002 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 12 00:07:44.423108 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Jul 12 00:07:44.472908 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 12 00:07:44.473050 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 12 00:07:44.482806 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 12 00:07:44.663403 systemd-journald[216]: Received SIGTERM from PID 1 (systemd). Jul 12 00:07:44.482864 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 12 00:07:44.504997 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 12 00:07:44.505121 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 12 00:07:44.515446 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 12 00:07:44.540567 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 12 00:07:44.559846 systemd[1]: Switching root. Jul 12 00:07:44.704803 systemd-journald[216]: Journal stopped 
Jul 12 00:07:33.396657 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jul 12 00:07:33.396663 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 12 00:07:33.396669 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jul 12 00:07:33.396676 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jul 12 00:07:33.396682 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Jul 12 00:07:33.396688 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Jul 12 00:07:33.396695 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jul 12 00:07:33.396701 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jul 12 00:07:33.396707 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jul 12 00:07:33.396716 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jul 12 00:07:33.396722 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Jul 12 00:07:33.396729 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jul 12 
00:07:33.396735 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jul 12 00:07:33.396742 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jul 12 00:07:33.396748 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jul 12 00:07:33.396754 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Jul 12 00:07:33.396760 kernel: Zone ranges: Jul 12 00:07:33.396767 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jul 12 00:07:33.396773 kernel: DMA32 empty Jul 12 00:07:33.396779 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jul 12 00:07:33.396785 kernel: Movable zone start for each node Jul 12 00:07:33.396796 kernel: Early memory node ranges Jul 12 00:07:33.396803 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jul 12 00:07:33.396810 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Jul 12 00:07:33.396817 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Jul 12 00:07:33.396823 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Jul 12 00:07:33.396832 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Jul 12 00:07:33.396838 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Jul 12 00:07:33.396845 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jul 12 00:07:33.396852 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jul 12 00:07:33.396859 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jul 12 00:07:33.396865 kernel: psci: probing for conduit method from ACPI. Jul 12 00:07:33.396872 kernel: psci: PSCIv1.1 detected in firmware. Jul 12 00:07:33.396879 kernel: psci: Using standard PSCI v0.2 function IDs Jul 12 00:07:33.396885 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Jul 12 00:07:33.396892 kernel: psci: SMC Calling Convention v1.4 Jul 12 00:07:33.396899 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jul 12 00:07:33.396905 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jul 12 00:07:33.396914 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jul 12 00:07:33.396920 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jul 12 00:07:33.396927 kernel: pcpu-alloc: [0] 0 [0] 1 Jul 12 00:07:33.396934 kernel: Detected PIPT I-cache on CPU0 Jul 12 00:07:33.396940 kernel: CPU features: detected: GIC system register CPU interface Jul 12 00:07:33.396947 kernel: CPU features: detected: Hardware dirty bit management Jul 12 00:07:33.396954 kernel: CPU features: detected: Spectre-BHB Jul 12 00:07:33.396961 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 12 00:07:33.396967 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 12 00:07:33.396974 kernel: CPU features: detected: ARM erratum 1418040 Jul 12 00:07:33.396981 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jul 12 00:07:33.396990 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 12 00:07:33.396996 kernel: alternatives: applying boot alternatives Jul 12 00:07:33.397004 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c Jul 12 00:07:33.397012 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jul 12 00:07:33.397018 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 12 00:07:33.397025 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 12 00:07:33.397032 kernel: Fallback order for Node 0: 0 Jul 12 00:07:33.397039 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Jul 12 00:07:33.397045 kernel: Policy zone: Normal Jul 12 00:07:33.397052 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 12 00:07:33.397059 kernel: software IO TLB: area num 2. Jul 12 00:07:33.397067 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Jul 12 00:07:33.397074 kernel: Memory: 3982628K/4194160K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 211532K reserved, 0K cma-reserved) Jul 12 00:07:33.397081 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 12 00:07:33.397088 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 12 00:07:33.397095 kernel: rcu: RCU event tracing is enabled. Jul 12 00:07:33.397102 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 12 00:07:33.397109 kernel: Trampoline variant of Tasks RCU enabled. Jul 12 00:07:33.397116 kernel: Tracing variant of Tasks RCU enabled. Jul 12 00:07:33.397122 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 12 00:07:33.397129 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 12 00:07:33.397136 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 12 00:07:33.397144 kernel: GICv3: 960 SPIs implemented Jul 12 00:07:33.401219 kernel: GICv3: 0 Extended SPIs implemented Jul 12 00:07:33.401238 kernel: Root IRQ handler: gic_handle_irq Jul 12 00:07:33.401246 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jul 12 00:07:33.401253 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jul 12 00:07:33.401260 kernel: ITS: No ITS available, not enabling LPIs Jul 12 00:07:33.401268 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 12 00:07:33.401275 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 12 00:07:33.401282 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 12 00:07:33.401289 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 12 00:07:33.401296 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 12 00:07:33.401311 kernel: Console: colour dummy device 80x25 Jul 12 00:07:33.401319 kernel: printk: console [tty1] enabled Jul 12 00:07:33.401326 kernel: ACPI: Core revision 20230628 Jul 12 00:07:33.401333 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 12 00:07:33.401340 kernel: pid_max: default: 32768 minimum: 301 Jul 12 00:07:33.401347 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 12 00:07:33.401354 kernel: landlock: Up and running. Jul 12 00:07:33.401362 kernel: SELinux: Initializing. 
Jul 12 00:07:33.401369 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 12 00:07:33.401376 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 12 00:07:33.401385 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 12 00:07:33.401392 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 12 00:07:33.401399 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Jul 12 00:07:33.401407 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 Jul 12 00:07:33.401414 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jul 12 00:07:33.401421 kernel: rcu: Hierarchical SRCU implementation. Jul 12 00:07:33.401429 kernel: rcu: Max phase no-delay instances is 400. Jul 12 00:07:33.401444 kernel: Remapping and enabling EFI services. Jul 12 00:07:33.401452 kernel: smp: Bringing up secondary CPUs ... Jul 12 00:07:33.401459 kernel: Detected PIPT I-cache on CPU1 Jul 12 00:07:33.401466 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jul 12 00:07:33.401476 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 12 00:07:33.401483 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 12 00:07:33.401491 kernel: smp: Brought up 1 node, 2 CPUs Jul 12 00:07:33.401498 kernel: SMP: Total of 2 processors activated. 
Jul 12 00:07:33.401505 kernel: CPU features: detected: 32-bit EL0 Support Jul 12 00:07:33.401515 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jul 12 00:07:33.401522 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 12 00:07:33.401530 kernel: CPU features: detected: CRC32 instructions Jul 12 00:07:33.401537 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 12 00:07:33.401544 kernel: CPU features: detected: LSE atomic instructions Jul 12 00:07:33.401552 kernel: CPU features: detected: Privileged Access Never Jul 12 00:07:33.401560 kernel: CPU: All CPU(s) started at EL1 Jul 12 00:07:33.401567 kernel: alternatives: applying system-wide alternatives Jul 12 00:07:33.401575 kernel: devtmpfs: initialized Jul 12 00:07:33.401584 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 12 00:07:33.401592 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 12 00:07:33.401600 kernel: pinctrl core: initialized pinctrl subsystem Jul 12 00:07:33.401607 kernel: SMBIOS 3.1.0 present. 
Jul 12 00:07:33.401615 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jul 12 00:07:33.401623 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 12 00:07:33.401630 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 12 00:07:33.401637 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 12 00:07:33.401645 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 12 00:07:33.401654 kernel: audit: initializing netlink subsys (disabled) Jul 12 00:07:33.401661 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jul 12 00:07:33.401669 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 12 00:07:33.401677 kernel: cpuidle: using governor menu Jul 12 00:07:33.401684 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 12 00:07:33.401692 kernel: ASID allocator initialised with 32768 entries Jul 12 00:07:33.401699 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 12 00:07:33.401706 kernel: Serial: AMBA PL011 UART driver Jul 12 00:07:33.401714 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 12 00:07:33.401723 kernel: Modules: 0 pages in range for non-PLT usage Jul 12 00:07:33.401730 kernel: Modules: 509008 pages in range for PLT usage Jul 12 00:07:33.401738 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 12 00:07:33.401745 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 12 00:07:33.401753 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 12 00:07:33.401760 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 12 00:07:33.401768 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 12 00:07:33.401775 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 12 00:07:33.401782 kernel: HugeTLB: 
registered 64.0 KiB page size, pre-allocated 0 pages Jul 12 00:07:33.401791 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 12 00:07:33.401799 kernel: ACPI: Added _OSI(Module Device) Jul 12 00:07:33.401806 kernel: ACPI: Added _OSI(Processor Device) Jul 12 00:07:33.401814 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 12 00:07:33.401821 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 12 00:07:33.401829 kernel: ACPI: Interpreter enabled Jul 12 00:07:33.401836 kernel: ACPI: Using GIC for interrupt routing Jul 12 00:07:33.401844 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jul 12 00:07:33.401851 kernel: printk: console [ttyAMA0] enabled Jul 12 00:07:33.401861 kernel: printk: bootconsole [pl11] disabled Jul 12 00:07:33.401868 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jul 12 00:07:33.401876 kernel: iommu: Default domain type: Translated Jul 12 00:07:33.401883 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 12 00:07:33.401891 kernel: efivars: Registered efivars operations Jul 12 00:07:33.401898 kernel: vgaarb: loaded Jul 12 00:07:33.401906 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 12 00:07:33.401913 kernel: VFS: Disk quotas dquot_6.6.0 Jul 12 00:07:33.401921 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 12 00:07:33.401930 kernel: pnp: PnP ACPI init Jul 12 00:07:33.401937 kernel: pnp: PnP ACPI: found 0 devices Jul 12 00:07:33.401945 kernel: NET: Registered PF_INET protocol family Jul 12 00:07:33.401952 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 12 00:07:33.401960 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 12 00:07:33.401967 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 12 00:07:33.401974 kernel: TCP established hash table entries: 32768 (order: 
6, 262144 bytes, linear) Jul 12 00:07:33.401982 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 12 00:07:33.401990 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 12 00:07:33.401998 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 12 00:07:33.402006 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 12 00:07:33.402014 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 12 00:07:33.402021 kernel: PCI: CLS 0 bytes, default 64 Jul 12 00:07:33.402028 kernel: kvm [1]: HYP mode not available Jul 12 00:07:33.402036 kernel: Initialise system trusted keyrings Jul 12 00:07:33.402043 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 12 00:07:33.402050 kernel: Key type asymmetric registered Jul 12 00:07:33.402058 kernel: Asymmetric key parser 'x509' registered Jul 12 00:07:33.402067 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 12 00:07:33.402074 kernel: io scheduler mq-deadline registered Jul 12 00:07:33.402081 kernel: io scheduler kyber registered Jul 12 00:07:33.402088 kernel: io scheduler bfq registered Jul 12 00:07:33.402096 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 12 00:07:33.402103 kernel: thunder_xcv, ver 1.0 Jul 12 00:07:33.402111 kernel: thunder_bgx, ver 1.0 Jul 12 00:07:33.402118 kernel: nicpf, ver 1.0 Jul 12 00:07:33.402126 kernel: nicvf, ver 1.0 Jul 12 00:07:33.402302 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 12 00:07:33.402381 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-12T00:07:32 UTC (1752278852) Jul 12 00:07:33.402392 kernel: efifb: probing for efifb Jul 12 00:07:33.402400 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jul 12 00:07:33.402407 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jul 12 00:07:33.402414 kernel: efifb: scrolling: redraw Jul 12 00:07:33.402422 kernel: efifb: Truecolor: size=8:8:8:8, 
shift=24:16:8:0 Jul 12 00:07:33.402429 kernel: Console: switching to colour frame buffer device 128x48 Jul 12 00:07:33.402439 kernel: fb0: EFI VGA frame buffer device Jul 12 00:07:33.402446 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jul 12 00:07:33.402454 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 12 00:07:33.402461 kernel: No ACPI PMU IRQ for CPU0 Jul 12 00:07:33.402468 kernel: No ACPI PMU IRQ for CPU1 Jul 12 00:07:33.402476 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Jul 12 00:07:33.402483 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jul 12 00:07:33.402490 kernel: watchdog: Hard watchdog permanently disabled Jul 12 00:07:33.402498 kernel: NET: Registered PF_INET6 protocol family Jul 12 00:07:33.402506 kernel: Segment Routing with IPv6 Jul 12 00:07:33.402514 kernel: In-situ OAM (IOAM) with IPv6 Jul 12 00:07:33.402521 kernel: NET: Registered PF_PACKET protocol family Jul 12 00:07:33.402529 kernel: Key type dns_resolver registered Jul 12 00:07:33.402536 kernel: registered taskstats version 1 Jul 12 00:07:33.402543 kernel: Loading compiled-in X.509 certificates Jul 12 00:07:33.402551 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: ed6b382df707adbd5942eaa048a1031fe26cbf15' Jul 12 00:07:33.402559 kernel: Key type .fscrypt registered Jul 12 00:07:33.402566 kernel: Key type fscrypt-provisioning registered Jul 12 00:07:33.402575 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 12 00:07:33.402582 kernel: ima: Allocated hash algorithm: sha1 Jul 12 00:07:33.402590 kernel: ima: No architecture policies found Jul 12 00:07:33.402597 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 12 00:07:33.402604 kernel: clk: Disabling unused clocks Jul 12 00:07:33.402612 kernel: Freeing unused kernel memory: 39424K Jul 12 00:07:33.402619 kernel: Run /init as init process Jul 12 00:07:33.402626 kernel: with arguments: Jul 12 00:07:33.402634 kernel: /init Jul 12 00:07:33.402643 kernel: with environment: Jul 12 00:07:33.402650 kernel: HOME=/ Jul 12 00:07:33.402657 kernel: TERM=linux Jul 12 00:07:33.402665 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 12 00:07:33.402675 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 12 00:07:33.402685 systemd[1]: Detected virtualization microsoft. Jul 12 00:07:33.402694 systemd[1]: Detected architecture arm64. Jul 12 00:07:33.402701 systemd[1]: Running in initrd. Jul 12 00:07:33.402711 systemd[1]: No hostname configured, using default hostname. Jul 12 00:07:33.402718 systemd[1]: Hostname set to . Jul 12 00:07:33.402727 systemd[1]: Initializing machine ID from random generator. Jul 12 00:07:33.402734 systemd[1]: Queued start job for default target initrd.target. Jul 12 00:07:33.402742 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 12 00:07:33.402751 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 00:07:33.402760 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Jul 12 00:07:33.402768 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 12 00:07:33.402778 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 12 00:07:33.402786 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 12 00:07:33.402796 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 12 00:07:33.402804 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 12 00:07:33.402812 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 00:07:33.402820 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 12 00:07:33.402830 systemd[1]: Reached target paths.target - Path Units. Jul 12 00:07:33.402838 systemd[1]: Reached target slices.target - Slice Units. Jul 12 00:07:33.402846 systemd[1]: Reached target swap.target - Swaps. Jul 12 00:07:33.402854 systemd[1]: Reached target timers.target - Timer Units. Jul 12 00:07:33.402862 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 12 00:07:33.402871 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 12 00:07:33.402879 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 12 00:07:33.402887 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 12 00:07:33.402895 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 12 00:07:33.402904 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 12 00:07:33.402912 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 00:07:33.402920 systemd[1]: Reached target sockets.target - Socket Units. 
Jul 12 00:07:33.402928 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 12 00:07:33.402937 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 12 00:07:33.402945 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 12 00:07:33.402952 systemd[1]: Starting systemd-fsck-usr.service... Jul 12 00:07:33.402960 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 12 00:07:33.402968 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 12 00:07:33.403000 systemd-journald[216]: Collecting audit messages is disabled. Jul 12 00:07:33.403022 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:07:33.403031 systemd-journald[216]: Journal started Jul 12 00:07:33.403053 systemd-journald[216]: Runtime Journal (/run/log/journal/7ea9d4e6272f41c7a4a97db60b671501) is 8.0M, max 78.5M, 70.5M free. Jul 12 00:07:33.409456 systemd-modules-load[217]: Inserted module 'overlay' Jul 12 00:07:33.436050 systemd[1]: Started systemd-journald.service - Journal Service. Jul 12 00:07:33.454641 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 12 00:07:33.457195 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 12 00:07:33.467136 kernel: Bridge firewalling registered Jul 12 00:07:33.468231 systemd-modules-load[217]: Inserted module 'br_netfilter' Jul 12 00:07:33.476231 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 00:07:33.488389 systemd[1]: Finished systemd-fsck-usr.service. Jul 12 00:07:33.500118 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 12 00:07:33.512559 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 12 00:07:33.537078 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 12 00:07:33.552650 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 12 00:07:33.572430 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 12 00:07:33.598401 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 12 00:07:33.612555 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 12 00:07:33.621524 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:07:33.636332 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 12 00:07:33.650501 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 00:07:33.679456 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 12 00:07:33.694144 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 12 00:07:33.710739 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 12 00:07:33.730563 dracut-cmdline[250]: dracut-dracut-053 Jul 12 00:07:33.730563 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c Jul 12 00:07:33.772271 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jul 12 00:07:33.785697 systemd-resolved[254]: Positive Trust Anchors:
Jul 12 00:07:33.785707 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 00:07:33.785740 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 12 00:07:33.788066 systemd-resolved[254]: Defaulting to hostname 'linux'.
Jul 12 00:07:33.789607 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 12 00:07:33.811938 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:07:33.887169 kernel: SCSI subsystem initialized
Jul 12 00:07:33.896175 kernel: Loading iSCSI transport class v2.0-870.
Jul 12 00:07:33.906179 kernel: iscsi: registered transport (tcp)
Jul 12 00:07:33.925483 kernel: iscsi: registered transport (qla4xxx)
Jul 12 00:07:33.925556 kernel: QLogic iSCSI HBA Driver
Jul 12 00:07:33.968258 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 12 00:07:33.989724 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 12 00:07:34.026176 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 12 00:07:34.026248 kernel: device-mapper: uevent: version 1.0.3
Jul 12 00:07:34.033304 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 12 00:07:34.084209 kernel: raid6: neonx8 gen() 15736 MB/s
Jul 12 00:07:34.105202 kernel: raid6: neonx4 gen() 15663 MB/s
Jul 12 00:07:34.125163 kernel: raid6: neonx2 gen() 13204 MB/s
Jul 12 00:07:34.146202 kernel: raid6: neonx1 gen() 10485 MB/s
Jul 12 00:07:34.166195 kernel: raid6: int64x8 gen() 6933 MB/s
Jul 12 00:07:34.186180 kernel: raid6: int64x4 gen() 7349 MB/s
Jul 12 00:07:34.207188 kernel: raid6: int64x2 gen() 6123 MB/s
Jul 12 00:07:34.230879 kernel: raid6: int64x1 gen() 5056 MB/s
Jul 12 00:07:34.230946 kernel: raid6: using algorithm neonx8 gen() 15736 MB/s
Jul 12 00:07:34.254693 kernel: raid6: .... xor() 11939 MB/s, rmw enabled
Jul 12 00:07:34.254774 kernel: raid6: using neon recovery algorithm
Jul 12 00:07:34.269135 kernel: xor: measuring software checksum speed
Jul 12 00:07:34.269176 kernel: 8regs : 19726 MB/sec
Jul 12 00:07:34.272750 kernel: 32regs : 19603 MB/sec
Jul 12 00:07:34.276507 kernel: arm64_neon : 27052 MB/sec
Jul 12 00:07:34.280965 kernel: xor: using function: arm64_neon (27052 MB/sec)
Jul 12 00:07:34.333178 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 12 00:07:34.345167 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 12 00:07:34.362335 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 00:07:34.386606 systemd-udevd[436]: Using default interface naming scheme 'v255'.
Jul 12 00:07:34.393210 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 00:07:34.414327 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 12 00:07:34.439516 dracut-pre-trigger[450]: rd.md=0: removing MD RAID activation
Jul 12 00:07:34.470683 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 12 00:07:34.489493 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 12 00:07:34.532136 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:07:34.554494 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 12 00:07:34.592866 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 12 00:07:34.608482 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 12 00:07:34.625367 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:07:34.640250 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 12 00:07:34.659302 kernel: hv_vmbus: Vmbus version:5.3
Jul 12 00:07:34.667391 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 12 00:07:34.687900 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 12 00:07:34.741541 kernel: hv_vmbus: registering driver hyperv_keyboard
Jul 12 00:07:34.741566 kernel: hv_vmbus: registering driver hid_hyperv
Jul 12 00:07:34.741576 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jul 12 00:07:34.741586 kernel: hv_vmbus: registering driver hv_netvsc
Jul 12 00:07:34.741595 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jul 12 00:07:34.741753 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 12 00:07:34.741763 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 12 00:07:34.716031 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 12 00:07:34.775588 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jul 12 00:07:34.775611 kernel: hv_vmbus: registering driver hv_storvsc
Jul 12 00:07:34.716181 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:07:34.821510 kernel: scsi host0: storvsc_host_t
Jul 12 00:07:34.821718 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jul 12 00:07:34.821744 kernel: scsi host1: storvsc_host_t
Jul 12 00:07:34.821839 kernel: PTP clock support registered
Jul 12 00:07:34.775335 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:07:34.844462 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jul 12 00:07:34.797115 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 00:07:34.797365 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:07:34.877479 kernel: hv_netvsc 0022487a-b470-0022-487a-b4700022487a eth0: VF slot 1 added
Jul 12 00:07:34.811816 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:07:34.850813 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:07:34.919553 kernel: hv_utils: Registering HyperV Utility Driver
Jul 12 00:07:34.919579 kernel: hv_vmbus: registering driver hv_utils
Jul 12 00:07:34.919597 kernel: hv_utils: Heartbeat IC version 3.0
Jul 12 00:07:34.919607 kernel: hv_utils: Shutdown IC version 3.2
Jul 12 00:07:34.868575 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 00:07:35.292629 kernel: hv_utils: TimeSync IC version 4.0
Jul 12 00:07:35.292654 kernel: hv_vmbus: registering driver hv_pci
Jul 12 00:07:34.868750 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:07:35.329838 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jul 12 00:07:35.330078 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 12 00:07:35.330090 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jul 12 00:07:35.330194 kernel: hv_pci 656047d6-af5d-4751-a91f-2e89a8827f23: PCI VMBus probing: Using version 0x10004
Jul 12 00:07:35.288557 systemd-resolved[254]: Clock change detected. Flushing caches.
Jul 12 00:07:35.292427 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:07:35.331957 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:07:35.524424 kernel: hv_pci 656047d6-af5d-4751-a91f-2e89a8827f23: PCI host bridge to bus af5d:00
Jul 12 00:07:35.524676 kernel: pci_bus af5d:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jul 12 00:07:35.524791 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jul 12 00:07:35.531647 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jul 12 00:07:35.531768 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 12 00:07:35.531869 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jul 12 00:07:35.531965 kernel: pci_bus af5d:00: No busn resource found for root bus, will use [bus 00-ff]
Jul 12 00:07:35.532064 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jul 12 00:07:35.532161 kernel: pci af5d:00:02.0: [15b3:1018] type 00 class 0x020000
Jul 12 00:07:35.535999 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:07:35.611725 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 12 00:07:35.611756 kernel: pci af5d:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 12 00:07:35.611788 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 12 00:07:35.611954 kernel: pci af5d:00:02.0: enabling Extended Tags
Jul 12 00:07:35.614581 kernel: pci af5d:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at af5d:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jul 12 00:07:35.634275 kernel: pci_bus af5d:00: busn_res: [bus 00-ff] end is updated to 00
Jul 12 00:07:35.634500 kernel: pci af5d:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 12 00:07:35.647348 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:07:35.680877 kernel: mlx5_core af5d:00:02.0: enabling device (0000 -> 0002)
Jul 12 00:07:35.687272 kernel: mlx5_core af5d:00:02.0: firmware version: 16.31.2424
Jul 12 00:07:35.980479 kernel: hv_netvsc 0022487a-b470-0022-487a-b4700022487a eth0: VF registering: eth1
Jul 12 00:07:35.980733 kernel: mlx5_core af5d:00:02.0 eth1: joined to eth0
Jul 12 00:07:35.990307 kernel: mlx5_core af5d:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jul 12 00:07:36.002276 kernel: mlx5_core af5d:00:02.0 enP44893s1: renamed from eth1
Jul 12 00:07:36.182088 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (499)
Jul 12 00:07:36.192077 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jul 12 00:07:36.216231 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 12 00:07:36.236541 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jul 12 00:07:36.290298 kernel: BTRFS: device fsid 394cecf3-1fd4-438a-991e-dc2b4121da0c devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (494)
Jul 12 00:07:36.306172 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jul 12 00:07:36.314203 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jul 12 00:07:36.356428 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 12 00:07:36.382287 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 12 00:07:36.394286 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 12 00:07:36.402290 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 12 00:07:37.403455 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 12 00:07:37.406278 disk-uuid[604]: The operation has completed successfully.
Jul 12 00:07:37.468387 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 12 00:07:37.468509 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 12 00:07:37.498486 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 12 00:07:37.511495 sh[717]: Success
Jul 12 00:07:37.542359 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 12 00:07:37.720754 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 12 00:07:37.730414 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 12 00:07:37.740147 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 12 00:07:37.775140 kernel: BTRFS info (device dm-0): first mount of filesystem 394cecf3-1fd4-438a-991e-dc2b4121da0c
Jul 12 00:07:37.775197 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:07:37.782232 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 12 00:07:37.787624 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 12 00:07:37.792024 kernel: BTRFS info (device dm-0): using free space tree
Jul 12 00:07:38.028002 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 12 00:07:38.033714 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 12 00:07:38.050555 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 12 00:07:38.058898 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 12 00:07:38.101963 kernel: BTRFS info (device sda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:07:38.101995 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:07:38.102005 kernel: BTRFS info (device sda6): using free space tree
Jul 12 00:07:38.138738 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 12 00:07:38.148660 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 12 00:07:38.160024 kernel: BTRFS info (device sda6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:07:38.168607 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 12 00:07:38.182564 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 12 00:07:38.239182 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 12 00:07:38.261447 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 12 00:07:38.292866 systemd-networkd[901]: lo: Link UP
Jul 12 00:07:38.292876 systemd-networkd[901]: lo: Gained carrier
Jul 12 00:07:38.297228 systemd-networkd[901]: Enumeration completed
Jul 12 00:07:38.297526 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 12 00:07:38.310706 systemd-networkd[901]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:07:38.310710 systemd-networkd[901]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 12 00:07:38.311150 systemd[1]: Reached target network.target - Network.
Jul 12 00:07:38.630283 kernel: mlx5_core af5d:00:02.0 enP44893s1: Link up
Jul 12 00:07:38.878287 kernel: hv_netvsc 0022487a-b470-0022-487a-b4700022487a eth0: Data path switched to VF: enP44893s1
Jul 12 00:07:38.879051 systemd-networkd[901]: enP44893s1: Link UP
Jul 12 00:07:38.879310 systemd-networkd[901]: eth0: Link UP
Jul 12 00:07:38.879705 systemd-networkd[901]: eth0: Gained carrier
Jul 12 00:07:38.879716 systemd-networkd[901]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:07:38.891907 systemd-networkd[901]: enP44893s1: Gained carrier
Jul 12 00:07:38.917324 systemd-networkd[901]: eth0: DHCPv4 address 10.200.20.17/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 12 00:07:39.390897 ignition[838]: Ignition 2.19.0
Jul 12 00:07:39.390910 ignition[838]: Stage: fetch-offline
Jul 12 00:07:39.395940 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 12 00:07:39.390952 ignition[838]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:07:39.390960 ignition[838]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 12 00:07:39.391056 ignition[838]: parsed url from cmdline: ""
Jul 12 00:07:39.419556 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 12 00:07:39.391059 ignition[838]: no config URL provided
Jul 12 00:07:39.391064 ignition[838]: reading system config file "/usr/lib/ignition/user.ign"
Jul 12 00:07:39.391071 ignition[838]: no config at "/usr/lib/ignition/user.ign"
Jul 12 00:07:39.391077 ignition[838]: failed to fetch config: resource requires networking
Jul 12 00:07:39.391405 ignition[838]: Ignition finished successfully
Jul 12 00:07:39.447733 ignition[910]: Ignition 2.19.0
Jul 12 00:07:39.447739 ignition[910]: Stage: fetch
Jul 12 00:07:39.447930 ignition[910]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:07:39.447940 ignition[910]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 12 00:07:39.448039 ignition[910]: parsed url from cmdline: ""
Jul 12 00:07:39.448043 ignition[910]: no config URL provided
Jul 12 00:07:39.448051 ignition[910]: reading system config file "/usr/lib/ignition/user.ign"
Jul 12 00:07:39.448058 ignition[910]: no config at "/usr/lib/ignition/user.ign"
Jul 12 00:07:39.448081 ignition[910]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jul 12 00:07:39.567181 ignition[910]: GET result: OK
Jul 12 00:07:39.567286 ignition[910]: config has been read from IMDS userdata
Jul 12 00:07:39.567337 ignition[910]: parsing config with SHA512: 4e8e8e6e1e3183ea1ff71397c04be537bf9486f6044e52c0003ea54969f4253329d465c724f838d806599362b9f329834c38a2fbd69494f5c102e1e8d9b4595c
Jul 12 00:07:39.571813 unknown[910]: fetched base config from "system"
Jul 12 00:07:39.572321 ignition[910]: fetch: fetch complete
Jul 12 00:07:39.571821 unknown[910]: fetched base config from "system"
Jul 12 00:07:39.572326 ignition[910]: fetch: fetch passed
Jul 12 00:07:39.571827 unknown[910]: fetched user config from "azure"
Jul 12 00:07:39.572398 ignition[910]: Ignition finished successfully
Jul 12 00:07:39.577343 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 12 00:07:39.617565 ignition[916]: Ignition 2.19.0
Jul 12 00:07:39.595558 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 12 00:07:39.617573 ignition[916]: Stage: kargs
Jul 12 00:07:39.631598 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 12 00:07:39.617804 ignition[916]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:07:39.653551 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 12 00:07:39.617815 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 12 00:07:39.619286 ignition[916]: kargs: kargs passed
Jul 12 00:07:39.682294 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 12 00:07:39.619347 ignition[916]: Ignition finished successfully
Jul 12 00:07:39.693829 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 12 00:07:39.675168 ignition[921]: Ignition 2.19.0
Jul 12 00:07:39.703479 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 12 00:07:39.675175 ignition[921]: Stage: disks
Jul 12 00:07:39.715691 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 12 00:07:39.675384 ignition[921]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:07:39.725639 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 12 00:07:39.675397 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 12 00:07:39.739247 systemd[1]: Reached target basic.target - Basic System.
Jul 12 00:07:39.676482 ignition[921]: disks: disks passed
Jul 12 00:07:39.758557 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 12 00:07:39.676539 ignition[921]: Ignition finished successfully
Jul 12 00:07:39.819630 systemd-fsck[931]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jul 12 00:07:39.827431 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 12 00:07:39.844654 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 12 00:07:39.907274 kernel: EXT4-fs (sda9): mounted filesystem 44c8362f-9431-4909-bc9a-f90e514bd0e9 r/w with ordered data mode. Quota mode: none.
Jul 12 00:07:39.908204 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 12 00:07:39.914245 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 12 00:07:39.967385 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 12 00:07:39.999882 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (942)
Jul 12 00:07:39.999908 kernel: BTRFS info (device sda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:07:39.999918 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:07:39.978048 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 12 00:07:40.014114 kernel: BTRFS info (device sda6): using free space tree
Jul 12 00:07:40.020794 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 12 00:07:40.044437 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 12 00:07:40.030397 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 12 00:07:40.030439 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 12 00:07:40.058972 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 12 00:07:40.069792 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 12 00:07:40.090560 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 12 00:07:40.234448 systemd-networkd[901]: eth0: Gained IPv6LL
Jul 12 00:07:40.234773 systemd-networkd[901]: enP44893s1: Gained IPv6LL
Jul 12 00:07:40.515748 coreos-metadata[959]: Jul 12 00:07:40.515 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 12 00:07:40.524501 coreos-metadata[959]: Jul 12 00:07:40.524 INFO Fetch successful
Jul 12 00:07:40.524501 coreos-metadata[959]: Jul 12 00:07:40.524 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jul 12 00:07:40.543575 coreos-metadata[959]: Jul 12 00:07:40.537 INFO Fetch successful
Jul 12 00:07:40.552906 coreos-metadata[959]: Jul 12 00:07:40.552 INFO wrote hostname ci-4081.3.4-n-047a586f92 to /sysroot/etc/hostname
Jul 12 00:07:40.554401 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 12 00:07:40.797441 initrd-setup-root[972]: cut: /sysroot/etc/passwd: No such file or directory
Jul 12 00:07:40.843458 initrd-setup-root[979]: cut: /sysroot/etc/group: No such file or directory
Jul 12 00:07:40.852892 initrd-setup-root[986]: cut: /sysroot/etc/shadow: No such file or directory
Jul 12 00:07:40.863104 initrd-setup-root[993]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 12 00:07:41.719620 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 12 00:07:41.742513 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 12 00:07:41.750502 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 12 00:07:41.778129 kernel: BTRFS info (device sda6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:07:41.772322 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 12 00:07:41.809913 ignition[1061]: INFO : Ignition 2.19.0
Jul 12 00:07:41.815601 ignition[1061]: INFO : Stage: mount
Jul 12 00:07:41.815601 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:07:41.815601 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 12 00:07:41.813307 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 12 00:07:41.850060 ignition[1061]: INFO : mount: mount passed
Jul 12 00:07:41.850060 ignition[1061]: INFO : Ignition finished successfully
Jul 12 00:07:41.825935 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 12 00:07:41.854528 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 12 00:07:41.872558 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 12 00:07:41.913224 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1073)
Jul 12 00:07:41.913297 kernel: BTRFS info (device sda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:07:41.919701 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:07:41.924299 kernel: BTRFS info (device sda6): using free space tree
Jul 12 00:07:41.931285 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 12 00:07:41.932992 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 12 00:07:41.960277 ignition[1090]: INFO : Ignition 2.19.0
Jul 12 00:07:41.960277 ignition[1090]: INFO : Stage: files
Jul 12 00:07:41.969556 ignition[1090]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:07:41.969556 ignition[1090]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 12 00:07:41.969556 ignition[1090]: DEBUG : files: compiled without relabeling support, skipping
Jul 12 00:07:41.989641 ignition[1090]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 12 00:07:41.989641 ignition[1090]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 12 00:07:42.057775 ignition[1090]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 12 00:07:42.066596 ignition[1090]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 12 00:07:42.066596 ignition[1090]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 12 00:07:42.058322 unknown[1090]: wrote ssh authorized keys file for user: core
Jul 12 00:07:42.087515 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 12 00:07:42.087515 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jul 12 00:07:42.122580 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 12 00:07:42.209088 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 12 00:07:42.220295 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 12 00:07:42.220295 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 12 00:07:42.220295 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:07:42.220295 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:07:42.220295 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 00:07:42.220295 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 00:07:42.220295 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:07:42.220295 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:07:42.306009 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:07:42.306009 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:07:42.306009 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 12 00:07:42.306009 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 12 00:07:42.306009 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 12 00:07:42.306009 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Jul 12 00:07:42.866515 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 12 00:07:43.098726 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 12 00:07:43.098726 ignition[1090]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 12 00:07:43.136275 ignition[1090]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:07:43.154379 ignition[1090]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:07:43.154379 ignition[1090]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 12 00:07:43.154379 ignition[1090]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 12 00:07:43.154379 ignition[1090]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 12 00:07:43.154379 ignition[1090]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:07:43.154379 ignition[1090]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:07:43.154379 ignition[1090]: INFO : files: files passed
Jul 12 00:07:43.154379 ignition[1090]: INFO : Ignition finished successfully
Jul 12 00:07:43.148280 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 12 00:07:43.192207 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 12 00:07:43.211462 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 12 00:07:43.233624 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 12 00:07:43.279533 initrd-setup-root-after-ignition[1118]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:07:43.279533 initrd-setup-root-after-ignition[1118]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:07:43.235808 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 12 00:07:43.310657 initrd-setup-root-after-ignition[1123]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:07:43.264180 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 00:07:43.274700 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 12 00:07:43.311574 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 12 00:07:43.364696 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 12 00:07:43.364840 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 12 00:07:43.377767 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 12 00:07:43.390470 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 12 00:07:43.402384 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 12 00:07:43.419556 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 12 00:07:43.445678 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 00:07:43.462529 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 12 00:07:43.481843 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 12 00:07:43.481974 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 12 00:07:43.494647 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 12 00:07:43.508274 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 12 00:07:43.522098 systemd[1]: Stopped target timers.target - Timer Units. Jul 12 00:07:43.533860 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 12 00:07:43.533945 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 12 00:07:43.551598 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 12 00:07:43.557741 systemd[1]: Stopped target basic.target - Basic System. Jul 12 00:07:43.569532 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 12 00:07:43.581134 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 12 00:07:43.592622 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 12 00:07:43.604850 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 12 00:07:43.617076 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 12 00:07:43.630052 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 12 00:07:43.641238 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 12 00:07:43.653889 systemd[1]: Stopped target swap.target - Swaps. Jul 12 00:07:43.664187 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 12 00:07:43.664310 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 12 00:07:43.680856 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 12 00:07:43.693000 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 00:07:43.706061 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 12 00:07:43.706129 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jul 12 00:07:43.719420 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 12 00:07:43.719504 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 12 00:07:43.738222 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 12 00:07:43.738314 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 12 00:07:43.746163 systemd[1]: ignition-files.service: Deactivated successfully. Jul 12 00:07:43.746220 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 12 00:07:43.759788 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 12 00:07:43.759847 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 12 00:07:43.804539 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 12 00:07:43.839673 ignition[1144]: INFO : Ignition 2.19.0 Jul 12 00:07:43.839673 ignition[1144]: INFO : Stage: umount Jul 12 00:07:43.839673 ignition[1144]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 00:07:43.839673 ignition[1144]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 12 00:07:43.839673 ignition[1144]: INFO : umount: umount passed Jul 12 00:07:43.839673 ignition[1144]: INFO : Ignition finished successfully Jul 12 00:07:43.817971 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 12 00:07:43.818059 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 00:07:43.845514 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 12 00:07:43.853774 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 12 00:07:43.853856 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 12 00:07:43.877517 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 12 00:07:43.877591 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 12 00:07:43.893166 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 12 00:07:43.893707 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 12 00:07:43.893818 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 12 00:07:43.910793 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 12 00:07:43.910860 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 12 00:07:43.923065 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 12 00:07:43.923132 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 12 00:07:43.935354 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 12 00:07:43.935411 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 12 00:07:43.941574 systemd[1]: Stopped target network.target - Network. Jul 12 00:07:43.953600 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 12 00:07:43.953661 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 12 00:07:43.967776 systemd[1]: Stopped target paths.target - Path Units. Jul 12 00:07:43.980093 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 12 00:07:43.983297 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 00:07:43.993309 systemd[1]: Stopped target slices.target - Slice Units. Jul 12 00:07:44.003605 systemd[1]: Stopped target sockets.target - Socket Units. Jul 12 00:07:44.014670 systemd[1]: iscsid.socket: Deactivated successfully. Jul 12 00:07:44.014725 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 12 00:07:44.026909 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 12 00:07:44.026972 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 12 00:07:44.039045 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 12 00:07:44.039108 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Jul 12 00:07:44.046895 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 12 00:07:44.046944 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 12 00:07:44.061878 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 12 00:07:44.076531 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 12 00:07:44.096385 systemd-networkd[901]: eth0: DHCPv6 lease lost Jul 12 00:07:44.096477 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 12 00:07:44.359853 kernel: hv_netvsc 0022487a-b470-0022-487a-b4700022487a eth0: Data path switched from VF: enP44893s1 Jul 12 00:07:44.096621 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 12 00:07:44.109655 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 12 00:07:44.109830 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 12 00:07:44.123452 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 12 00:07:44.123517 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 12 00:07:44.155744 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 12 00:07:44.161840 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 12 00:07:44.161925 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 12 00:07:44.178717 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 12 00:07:44.178786 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:07:44.190407 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 12 00:07:44.190469 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 12 00:07:44.203558 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 12 00:07:44.203616 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jul 12 00:07:44.220096 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 00:07:44.261237 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 12 00:07:44.261442 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 00:07:44.274045 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 12 00:07:44.274107 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 12 00:07:44.285495 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 12 00:07:44.285541 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 00:07:44.297696 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 12 00:07:44.297751 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 12 00:07:44.315131 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 12 00:07:44.315190 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 12 00:07:44.342445 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 12 00:07:44.342512 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 12 00:07:44.380526 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 12 00:07:44.396671 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 12 00:07:44.396753 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 12 00:07:44.410541 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 00:07:44.410603 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:07:44.423002 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 12 00:07:44.423108 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Jul 12 00:07:44.472908 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 12 00:07:44.473050 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 12 00:07:44.482806 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 12 00:07:44.663403 systemd-journald[216]: Received SIGTERM from PID 1 (systemd). Jul 12 00:07:44.482864 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 12 00:07:44.504997 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 12 00:07:44.505121 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 12 00:07:44.515446 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 12 00:07:44.540567 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 12 00:07:44.559846 systemd[1]: Switching root. Jul 12 00:07:44.704803 systemd-journald[216]: Journal stopped Jul 12 00:07:50.095602 kernel: mlx5_core af5d:00:02.0: poll_health:835:(pid 0): device's health compromised - reached miss count Jul 12 00:07:50.095637 kernel: SELinux: policy capability network_peer_controls=1 Jul 12 00:07:50.095652 kernel: SELinux: policy capability open_perms=1 Jul 12 00:07:50.095661 kernel: SELinux: policy capability extended_socket_class=1 Jul 12 00:07:50.095669 kernel: SELinux: policy capability always_check_network=0 Jul 12 00:07:50.095677 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 12 00:07:50.095686 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 12 00:07:50.095695 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 12 00:07:50.095703 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 12 00:07:50.095712 kernel: audit: type=1403 audit(1752278865.708:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 12 00:07:50.095724 systemd[1]: Successfully loaded SELinux policy in 142.108ms. Jul 12 00:07:50.095735 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.572ms. 
Jul 12 00:07:50.095745 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 12 00:07:50.095755 systemd[1]: Detected virtualization microsoft. Jul 12 00:07:50.095767 systemd[1]: Detected architecture arm64. Jul 12 00:07:50.095776 systemd[1]: Detected first boot. Jul 12 00:07:50.095786 systemd[1]: Hostname set to . Jul 12 00:07:50.095796 systemd[1]: Initializing machine ID from random generator. Jul 12 00:07:50.095805 zram_generator::config[1186]: No configuration found. Jul 12 00:07:50.095815 systemd[1]: Populated /etc with preset unit settings. Jul 12 00:07:50.095825 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 12 00:07:50.095837 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 12 00:07:50.095846 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 12 00:07:50.095859 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 12 00:07:50.095869 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 12 00:07:50.095880 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 12 00:07:50.095889 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 12 00:07:50.095899 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 12 00:07:50.095911 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 12 00:07:50.095921 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 12 00:07:50.095931 systemd[1]: Created slice user.slice - User and Session Slice. 
Jul 12 00:07:50.095941 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 12 00:07:50.095951 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 00:07:50.095960 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 12 00:07:50.095970 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 12 00:07:50.095981 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 12 00:07:50.095993 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 12 00:07:50.096003 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 12 00:07:50.096013 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 00:07:50.096023 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 12 00:07:50.096035 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 12 00:07:50.096045 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 12 00:07:50.096056 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 12 00:07:50.096066 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 12 00:07:50.096078 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 12 00:07:50.096088 systemd[1]: Reached target slices.target - Slice Units. Jul 12 00:07:50.096098 systemd[1]: Reached target swap.target - Swaps. Jul 12 00:07:50.096109 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 12 00:07:50.096119 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 12 00:07:50.096129 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jul 12 00:07:50.096139 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 12 00:07:50.096151 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 00:07:50.096162 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 12 00:07:50.096172 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 12 00:07:50.096182 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 12 00:07:50.096192 systemd[1]: Mounting media.mount - External Media Directory... Jul 12 00:07:50.096202 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 12 00:07:50.096213 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 12 00:07:50.096224 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 12 00:07:50.096234 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 12 00:07:50.096244 systemd[1]: Reached target machines.target - Containers. Jul 12 00:07:50.096275 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 12 00:07:50.096288 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:07:50.096299 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 12 00:07:50.096309 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 12 00:07:50.096323 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:07:50.096333 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 12 00:07:50.096343 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jul 12 00:07:50.096353 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 12 00:07:50.096363 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:07:50.096374 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 12 00:07:50.096384 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 12 00:07:50.096394 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 12 00:07:50.096405 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 12 00:07:50.096415 systemd[1]: Stopped systemd-fsck-usr.service. Jul 12 00:07:50.096425 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 12 00:07:50.096435 kernel: fuse: init (API version 7.39) Jul 12 00:07:50.096444 kernel: loop: module loaded Jul 12 00:07:50.096454 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 12 00:07:50.096464 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 12 00:07:50.096474 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 12 00:07:50.096509 systemd-journald[1268]: Collecting audit messages is disabled. Jul 12 00:07:50.096533 systemd-journald[1268]: Journal started Jul 12 00:07:50.096554 systemd-journald[1268]: Runtime Journal (/run/log/journal/3519467f5e324edebc5c4a1f982ca683) is 8.0M, max 78.5M, 70.5M free. Jul 12 00:07:49.013825 systemd[1]: Queued start job for default target multi-user.target. Jul 12 00:07:49.151213 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 12 00:07:49.151639 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 12 00:07:49.151980 systemd[1]: systemd-journald.service: Consumed 3.471s CPU time. 
Jul 12 00:07:50.126593 kernel: ACPI: bus type drm_connector registered Jul 12 00:07:50.126665 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 12 00:07:50.142597 systemd[1]: verity-setup.service: Deactivated successfully. Jul 12 00:07:50.142660 systemd[1]: Stopped verity-setup.service. Jul 12 00:07:50.159690 systemd[1]: Started systemd-journald.service - Journal Service. Jul 12 00:07:50.160663 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 12 00:07:50.166913 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 12 00:07:50.173718 systemd[1]: Mounted media.mount - External Media Directory. Jul 12 00:07:50.179448 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 12 00:07:50.185715 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 12 00:07:50.196012 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 12 00:07:50.202491 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 00:07:50.211424 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 12 00:07:50.211668 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 12 00:07:50.220326 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:07:50.220590 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:07:50.228966 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 00:07:50.229218 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 12 00:07:50.237361 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:07:50.237599 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:07:50.246875 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 12 00:07:50.247114 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Jul 12 00:07:50.254682 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:07:50.254932 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 00:07:50.262056 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 12 00:07:50.270196 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 12 00:07:50.281417 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 12 00:07:50.294087 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 12 00:07:50.303375 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 12 00:07:50.320733 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 12 00:07:50.333394 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 12 00:07:50.341333 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 12 00:07:50.349270 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 12 00:07:50.349420 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 12 00:07:50.357730 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 12 00:07:50.367624 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 12 00:07:50.375541 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 12 00:07:50.381682 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:07:50.383209 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 12 00:07:50.391883 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jul 12 00:07:50.400501 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:07:50.401609 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 12 00:07:50.408446 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 12 00:07:50.409669 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 12 00:07:50.418527 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 12 00:07:50.428456 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 12 00:07:50.447488 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 12 00:07:50.458470 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 12 00:07:50.469809 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 12 00:07:50.480012 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 12 00:07:50.481452 systemd-journald[1268]: Time spent on flushing to /var/log/journal/3519467f5e324edebc5c4a1f982ca683 is 14.705ms for 900 entries. Jul 12 00:07:50.481452 systemd-journald[1268]: System Journal (/var/log/journal/3519467f5e324edebc5c4a1f982ca683) is 8.0M, max 2.6G, 2.6G free. Jul 12 00:07:50.529102 systemd-journald[1268]: Received client request to flush runtime journal. Jul 12 00:07:50.529208 kernel: loop0: detected capacity change from 0 to 114328 Jul 12 00:07:50.493969 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 12 00:07:50.510711 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 12 00:07:50.527702 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
Jul 12 00:07:50.536545 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 12 00:07:50.549457 udevadm[1324]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 12 00:07:50.550519 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:07:50.588626 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 12 00:07:50.590012 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 12 00:07:50.781946 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 12 00:07:50.794232 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 12 00:07:50.895284 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 12 00:07:50.896526 systemd-tmpfiles[1336]: ACLs are not supported, ignoring. Jul 12 00:07:50.896540 systemd-tmpfiles[1336]: ACLs are not supported, ignoring. Jul 12 00:07:50.903348 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 12 00:07:50.949288 kernel: loop1: detected capacity change from 0 to 114432 Jul 12 00:07:51.251287 kernel: loop2: detected capacity change from 0 to 211168 Jul 12 00:07:51.297280 kernel: loop3: detected capacity change from 0 to 31320 Jul 12 00:07:51.588308 kernel: loop4: detected capacity change from 0 to 114328 Jul 12 00:07:51.597313 kernel: loop5: detected capacity change from 0 to 114432 Jul 12 00:07:51.606310 kernel: loop6: detected capacity change from 0 to 211168 Jul 12 00:07:51.616297 kernel: loop7: detected capacity change from 0 to 31320 Jul 12 00:07:51.618717 (sd-merge)[1344]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jul 12 00:07:51.619125 (sd-merge)[1344]: Merged extensions into '/usr'. 
Jul 12 00:07:51.631156 systemd[1]: Reloading requested from client PID 1320 ('systemd-sysext') (unit systemd-sysext.service)... Jul 12 00:07:51.631178 systemd[1]: Reloading... Jul 12 00:07:51.709910 zram_generator::config[1367]: No configuration found. Jul 12 00:07:51.854855 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:07:51.924326 systemd[1]: Reloading finished in 292 ms. Jul 12 00:07:51.953223 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 12 00:07:51.960717 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 12 00:07:51.975431 systemd[1]: Starting ensure-sysext.service... Jul 12 00:07:51.981317 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 12 00:07:51.990505 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 00:07:52.021411 systemd[1]: Reloading requested from client PID 1426 ('systemctl') (unit ensure-sysext.service)... Jul 12 00:07:52.021435 systemd[1]: Reloading... Jul 12 00:07:52.024022 systemd-tmpfiles[1427]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 12 00:07:52.024337 systemd-tmpfiles[1427]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 12 00:07:52.024491 systemd-udevd[1428]: Using default interface naming scheme 'v255'. Jul 12 00:07:52.029408 systemd-tmpfiles[1427]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 12 00:07:52.029689 systemd-tmpfiles[1427]: ACLs are not supported, ignoring. Jul 12 00:07:52.029747 systemd-tmpfiles[1427]: ACLs are not supported, ignoring. 
Jul 12 00:07:52.049277 systemd-tmpfiles[1427]: Detected autofs mount point /boot during canonicalization of boot. Jul 12 00:07:52.049287 systemd-tmpfiles[1427]: Skipping /boot Jul 12 00:07:52.063023 systemd-tmpfiles[1427]: Detected autofs mount point /boot during canonicalization of boot. Jul 12 00:07:52.063036 systemd-tmpfiles[1427]: Skipping /boot Jul 12 00:07:52.120708 zram_generator::config[1456]: No configuration found. Jul 12 00:07:52.243495 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:07:52.323785 systemd[1]: Reloading finished in 302 ms. Jul 12 00:07:52.342365 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 00:07:52.358814 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 00:07:52.391583 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv... Jul 12 00:07:52.406597 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 12 00:07:52.417023 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 12 00:07:52.425965 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:07:52.427573 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:07:52.445509 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 12 00:07:52.456678 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 00:07:52.468490 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:07:52.478945 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 12 00:07:52.492276 kernel: mousedev: PS/2 mouse device common for all mice Jul 12 00:07:52.492550 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 12 00:07:52.506431 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 12 00:07:52.517359 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 12 00:07:52.524469 systemd[1]: Reached target time-set.target - System Time Set. Jul 12 00:07:52.538540 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 12 00:07:52.547543 systemd[1]: Finished ensure-sysext.service. Jul 12 00:07:52.552587 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:07:52.552751 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:07:52.560819 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 00:07:52.560968 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 12 00:07:52.568349 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:07:52.568483 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:07:52.577394 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:07:52.577548 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 00:07:52.586539 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 12 00:07:52.605870 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 12 00:07:52.609496 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 12 00:07:52.624326 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped. 
Jul 12 00:07:52.626381 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:07:52.626450 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 12 00:07:52.631987 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 12 00:07:52.645603 kernel: hv_vmbus: registering driver hv_balloon Jul 12 00:07:52.645691 kernel: hv_vmbus: registering driver hyperv_fb Jul 12 00:07:52.645707 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jul 12 00:07:52.652275 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jul 12 00:07:52.668327 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jul 12 00:07:52.668419 kernel: hv_balloon: Memory hot add disabled on ARM64 Jul 12 00:07:52.669278 kernel: Console: switching to colour dummy device 80x25 Jul 12 00:07:52.686161 kernel: Console: switching to colour frame buffer device 128x48 Jul 12 00:07:52.688511 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:07:52.710786 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 00:07:52.710961 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:07:52.731295 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1518) Jul 12 00:07:52.736634 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:07:52.756446 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 00:07:52.757310 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:07:52.774335 augenrules[1608]: No rules Jul 12 00:07:52.779505 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 12 00:07:52.790867 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 12 00:07:52.817273 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 12 00:07:52.837962 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 12 00:07:52.848008 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 12 00:07:52.864524 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 12 00:07:52.875488 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 12 00:07:52.948531 systemd-resolved[1563]: Positive Trust Anchors: Jul 12 00:07:52.948878 systemd-resolved[1563]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 12 00:07:52.948915 systemd-resolved[1563]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 12 00:07:52.967899 lvm[1638]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 12 00:07:52.981108 systemd-networkd[1562]: lo: Link UP Jul 12 00:07:52.981120 systemd-networkd[1562]: lo: Gained carrier Jul 12 00:07:52.984082 systemd-networkd[1562]: Enumeration completed Jul 12 00:07:52.984244 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jul 12 00:07:52.984705 systemd-networkd[1562]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:07:52.984711 systemd-networkd[1562]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:07:52.997463 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 12 00:07:53.007331 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 12 00:07:53.016417 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 12 00:07:53.018482 systemd-resolved[1563]: Using system hostname 'ci-4081.3.4-n-047a586f92'. Jul 12 00:07:53.029535 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 12 00:07:53.038966 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 12 00:07:53.047665 lvm[1645]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 12 00:07:53.076337 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 12 00:07:53.089279 kernel: mlx5_core af5d:00:02.0 enP44893s1: Link up Jul 12 00:07:53.134283 kernel: hv_netvsc 0022487a-b470-0022-487a-b4700022487a eth0: Data path switched to VF: enP44893s1 Jul 12 00:07:53.134639 systemd-networkd[1562]: enP44893s1: Link UP Jul 12 00:07:53.134735 systemd-networkd[1562]: eth0: Link UP Jul 12 00:07:53.134739 systemd-networkd[1562]: eth0: Gained carrier Jul 12 00:07:53.134754 systemd-networkd[1562]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:07:53.137706 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 12 00:07:53.144520 systemd[1]: Reached target network.target - Network. 
Jul 12 00:07:53.150040 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 12 00:07:53.157722 systemd-networkd[1562]: enP44893s1: Gained carrier Jul 12 00:07:53.172336 systemd-networkd[1562]: eth0: DHCPv4 address 10.200.20.17/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 12 00:07:53.347625 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:07:53.380978 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 12 00:07:53.388596 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 12 00:07:54.890542 systemd-networkd[1562]: eth0: Gained IPv6LL Jul 12 00:07:54.893024 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 12 00:07:54.901429 systemd[1]: Reached target network-online.target - Network is Online. Jul 12 00:07:55.018473 systemd-networkd[1562]: enP44893s1: Gained IPv6LL Jul 12 00:07:55.576973 ldconfig[1315]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 12 00:07:55.588795 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 12 00:07:55.599466 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 12 00:07:55.628099 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 12 00:07:55.634732 systemd[1]: Reached target sysinit.target - System Initialization. Jul 12 00:07:55.640735 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 12 00:07:55.647616 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 12 00:07:55.654957 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Jul 12 00:07:55.660929 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 12 00:07:55.668052 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 12 00:07:55.675234 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 12 00:07:55.675311 systemd[1]: Reached target paths.target - Path Units. Jul 12 00:07:55.680456 systemd[1]: Reached target timers.target - Timer Units. Jul 12 00:07:55.686301 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 12 00:07:55.693906 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 12 00:07:55.722883 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 12 00:07:55.729301 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 12 00:07:55.735523 systemd[1]: Reached target sockets.target - Socket Units. Jul 12 00:07:55.740944 systemd[1]: Reached target basic.target - Basic System. Jul 12 00:07:55.746677 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 12 00:07:55.746703 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 12 00:07:55.757374 systemd[1]: Starting chronyd.service - NTP client/server... Jul 12 00:07:55.765408 systemd[1]: Starting containerd.service - containerd container runtime... Jul 12 00:07:55.780507 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 12 00:07:55.787073 (chronyd)[1658]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jul 12 00:07:55.790164 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 12 00:07:55.799057 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Jul 12 00:07:55.808534 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 12 00:07:55.809643 chronyd[1667]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jul 12 00:07:55.812166 chronyd[1667]: Timezone right/UTC failed leap second check, ignoring Jul 12 00:07:55.814448 chronyd[1667]: Loaded seccomp filter (level 2) Jul 12 00:07:55.818846 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 12 00:07:55.818897 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jul 12 00:07:55.822242 jq[1664]: false Jul 12 00:07:55.826519 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jul 12 00:07:55.834550 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jul 12 00:07:55.834913 KVP[1668]: KVP starting; pid is:1668 Jul 12 00:07:55.839361 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:07:55.846706 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 12 00:07:55.856433 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 12 00:07:55.865398 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 12 00:07:55.876734 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jul 12 00:07:55.883718 extend-filesystems[1665]: Found loop4 Jul 12 00:07:55.889826 extend-filesystems[1665]: Found loop5 Jul 12 00:07:55.889826 extend-filesystems[1665]: Found loop6 Jul 12 00:07:55.889826 extend-filesystems[1665]: Found loop7 Jul 12 00:07:55.889826 extend-filesystems[1665]: Found sda Jul 12 00:07:55.889826 extend-filesystems[1665]: Found sda1 Jul 12 00:07:55.889826 extend-filesystems[1665]: Found sda2 Jul 12 00:07:55.889826 extend-filesystems[1665]: Found sda3 Jul 12 00:07:55.889826 extend-filesystems[1665]: Found usr Jul 12 00:07:55.889826 extend-filesystems[1665]: Found sda4 Jul 12 00:07:55.889826 extend-filesystems[1665]: Found sda6 Jul 12 00:07:55.889826 extend-filesystems[1665]: Found sda7 Jul 12 00:07:55.889826 extend-filesystems[1665]: Found sda9 Jul 12 00:07:55.889826 extend-filesystems[1665]: Checking size of /dev/sda9 Jul 12 00:07:56.030892 extend-filesystems[1665]: Old size kept for /dev/sda9 Jul 12 00:07:56.030892 extend-filesystems[1665]: Found sr0 Jul 12 00:07:55.940892 dbus-daemon[1663]: [system] SELinux support is enabled Jul 12 00:07:56.159588 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1707) Jul 12 00:07:56.159620 kernel: hv_utils: KVP IC version 4.0 Jul 12 00:07:55.898353 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jul 12 00:07:56.159831 coreos-metadata[1660]: Jul 12 00:07:56.044 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 12 00:07:56.159831 coreos-metadata[1660]: Jul 12 00:07:56.059 INFO Fetch successful Jul 12 00:07:56.159831 coreos-metadata[1660]: Jul 12 00:07:56.059 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jul 12 00:07:56.159831 coreos-metadata[1660]: Jul 12 00:07:56.072 INFO Fetch successful Jul 12 00:07:56.159831 coreos-metadata[1660]: Jul 12 00:07:56.073 INFO Fetching http://168.63.129.16/machine/40796f0a-6092-4d55-8a6a-459b2206bca3/07dab47a%2D2c92%2D458f%2D9ba4%2Dbdbd78488c3a.%5Fci%2D4081.3.4%2Dn%2D047a586f92?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jul 12 00:07:56.159831 coreos-metadata[1660]: Jul 12 00:07:56.078 INFO Fetch successful Jul 12 00:07:56.159831 coreos-metadata[1660]: Jul 12 00:07:56.078 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jul 12 00:07:56.159831 coreos-metadata[1660]: Jul 12 00:07:56.093 INFO Fetch successful Jul 12 00:07:56.120850 KVP[1668]: KVP LIC Version: 3.1 Jul 12 00:07:55.913466 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 12 00:07:55.926883 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 12 00:07:55.927342 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 12 00:07:56.160623 update_engine[1689]: I20250712 00:07:56.060012 1689 main.cc:92] Flatcar Update Engine starting Jul 12 00:07:56.160623 update_engine[1689]: I20250712 00:07:56.078367 1689 update_check_scheduler.cc:74] Next update check in 11m27s Jul 12 00:07:55.936732 systemd[1]: Starting update-engine.service - Update Engine... 
Jul 12 00:07:56.160901 jq[1697]: true Jul 12 00:07:55.967625 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 12 00:07:55.981190 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 12 00:07:55.990628 systemd[1]: Started chronyd.service - NTP client/server. Jul 12 00:07:56.025718 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 12 00:07:56.025869 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 12 00:07:56.026125 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 12 00:07:56.026305 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 12 00:07:56.077179 systemd-logind[1686]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jul 12 00:07:56.077689 systemd-logind[1686]: New seat seat0. Jul 12 00:07:56.138711 systemd[1]: Started systemd-logind.service - User Login Management. Jul 12 00:07:56.146941 systemd[1]: motdgen.service: Deactivated successfully. Jul 12 00:07:56.149350 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 12 00:07:56.170515 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 12 00:07:56.183693 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 12 00:07:56.183871 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 12 00:07:56.211495 jq[1747]: true Jul 12 00:07:56.220813 (ntainerd)[1748]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 12 00:07:56.226732 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 12 00:07:56.237657 dbus-daemon[1663]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 12 00:07:56.247772 systemd[1]: Started update-engine.service - Update Engine. 
Jul 12 00:07:56.263091 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 12 00:07:56.264596 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 12 00:07:56.264766 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 12 00:07:56.274799 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 12 00:07:56.274927 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 12 00:07:56.294179 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 12 00:07:56.311388 tar[1746]: linux-arm64/LICENSE Jul 12 00:07:56.311635 tar[1746]: linux-arm64/helm Jul 12 00:07:56.359498 bash[1779]: Updated "/home/core/.ssh/authorized_keys" Jul 12 00:07:56.360904 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 12 00:07:56.375985 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 12 00:07:56.542012 locksmithd[1781]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 12 00:07:56.642117 sshd_keygen[1698]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 12 00:07:56.689199 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 12 00:07:56.709523 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 12 00:07:56.725540 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jul 12 00:07:56.736476 systemd[1]: issuegen.service: Deactivated successfully. Jul 12 00:07:56.736645 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Jul 12 00:07:56.752840 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 12 00:07:56.783567 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 12 00:07:56.800001 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 12 00:07:56.809376 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 12 00:07:56.821831 systemd[1]: Reached target getty.target - Login Prompts. Jul 12 00:07:56.830734 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jul 12 00:07:56.856545 containerd[1748]: time="2025-07-12T00:07:56.856468220Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 12 00:07:56.906560 containerd[1748]: time="2025-07-12T00:07:56.906496980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:07:56.909872 containerd[1748]: time="2025-07-12T00:07:56.909829340Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:07:56.909872 containerd[1748]: time="2025-07-12T00:07:56.909864700Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 12 00:07:56.909964 containerd[1748]: time="2025-07-12T00:07:56.909880700Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 12 00:07:56.910084 containerd[1748]: time="2025-07-12T00:07:56.910059620Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 12 00:07:56.910120 containerd[1748]: time="2025-07-12T00:07:56.910084340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jul 12 00:07:56.910168 containerd[1748]: time="2025-07-12T00:07:56.910147100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:07:56.910168 containerd[1748]: time="2025-07-12T00:07:56.910165340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:07:56.910882 containerd[1748]: time="2025-07-12T00:07:56.910849020Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:07:56.910882 containerd[1748]: time="2025-07-12T00:07:56.910875300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 12 00:07:56.910951 containerd[1748]: time="2025-07-12T00:07:56.910891620Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:07:56.910951 containerd[1748]: time="2025-07-12T00:07:56.910902900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 12 00:07:56.911013 containerd[1748]: time="2025-07-12T00:07:56.910991380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:07:56.911218 containerd[1748]: time="2025-07-12T00:07:56.911195820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:07:56.912381 containerd[1748]: time="2025-07-12T00:07:56.912349460Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:07:56.912381 containerd[1748]: time="2025-07-12T00:07:56.912377620Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 12 00:07:56.912492 containerd[1748]: time="2025-07-12T00:07:56.912468860Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 12 00:07:56.912533 containerd[1748]: time="2025-07-12T00:07:56.912517820Z" level=info msg="metadata content store policy set" policy=shared Jul 12 00:07:56.931398 containerd[1748]: time="2025-07-12T00:07:56.931352260Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 12 00:07:56.932832 containerd[1748]: time="2025-07-12T00:07:56.932790020Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 12 00:07:56.932884 containerd[1748]: time="2025-07-12T00:07:56.932845060Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 12 00:07:56.932884 containerd[1748]: time="2025-07-12T00:07:56.932871540Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 12 00:07:56.932935 containerd[1748]: time="2025-07-12T00:07:56.932891420Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 12 00:07:56.933108 containerd[1748]: time="2025-07-12T00:07:56.933084220Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 12 00:07:56.935823 containerd[1748]: time="2025-07-12T00:07:56.934788220Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jul 12 00:07:56.935823 containerd[1748]: time="2025-07-12T00:07:56.934928580Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 12 00:07:56.935910 containerd[1748]: time="2025-07-12T00:07:56.935832420Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 12 00:07:56.935910 containerd[1748]: time="2025-07-12T00:07:56.935868420Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 12 00:07:56.935910 containerd[1748]: time="2025-07-12T00:07:56.935904020Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 12 00:07:56.935966 containerd[1748]: time="2025-07-12T00:07:56.935924620Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 12 00:07:56.935966 containerd[1748]: time="2025-07-12T00:07:56.935941020Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 12 00:07:56.935966 containerd[1748]: time="2025-07-12T00:07:56.935959980Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 12 00:07:56.936026 containerd[1748]: time="2025-07-12T00:07:56.935980220Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 12 00:07:56.936026 containerd[1748]: time="2025-07-12T00:07:56.936000060Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 12 00:07:56.936026 containerd[1748]: time="2025-07-12T00:07:56.936016340Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jul 12 00:07:56.936077 containerd[1748]: time="2025-07-12T00:07:56.936033220Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 12 00:07:56.936077 containerd[1748]: time="2025-07-12T00:07:56.936059740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 12 00:07:56.936118 containerd[1748]: time="2025-07-12T00:07:56.936077980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 12 00:07:56.936118 containerd[1748]: time="2025-07-12T00:07:56.936094340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 12 00:07:56.936118 containerd[1748]: time="2025-07-12T00:07:56.936109660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 12 00:07:56.936220 containerd[1748]: time="2025-07-12T00:07:56.936126060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 12 00:07:56.936220 containerd[1748]: time="2025-07-12T00:07:56.936143780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 12 00:07:56.936220 containerd[1748]: time="2025-07-12T00:07:56.936159380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 12 00:07:56.936220 containerd[1748]: time="2025-07-12T00:07:56.936175820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 12 00:07:56.936220 containerd[1748]: time="2025-07-12T00:07:56.936192540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 12 00:07:56.936220 containerd[1748]: time="2025-07-12T00:07:56.936212140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Jul 12 00:07:56.936349 containerd[1748]: time="2025-07-12T00:07:56.936227860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 12 00:07:56.936349 containerd[1748]: time="2025-07-12T00:07:56.936249380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 12 00:07:56.936349 containerd[1748]: time="2025-07-12T00:07:56.936284620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 12 00:07:56.936349 containerd[1748]: time="2025-07-12T00:07:56.936305980Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 12 00:07:56.936349 containerd[1748]: time="2025-07-12T00:07:56.936333380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 12 00:07:56.936349 containerd[1748]: time="2025-07-12T00:07:56.936349380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 12 00:07:56.936455 containerd[1748]: time="2025-07-12T00:07:56.936364540Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 12 00:07:56.936455 containerd[1748]: time="2025-07-12T00:07:56.936423220Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 12 00:07:56.936455 containerd[1748]: time="2025-07-12T00:07:56.936446460Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 12 00:07:56.936513 containerd[1748]: time="2025-07-12T00:07:56.936458420Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Jul 12 00:07:56.936513 containerd[1748]: time="2025-07-12T00:07:56.936475100Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 12 00:07:56.936513 containerd[1748]: time="2025-07-12T00:07:56.936489260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 12 00:07:56.936513 containerd[1748]: time="2025-07-12T00:07:56.936506300Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 12 00:07:56.936590 containerd[1748]: time="2025-07-12T00:07:56.936517180Z" level=info msg="NRI interface is disabled by configuration." Jul 12 00:07:56.936590 containerd[1748]: time="2025-07-12T00:07:56.936548420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 12 00:07:56.938318 containerd[1748]: time="2025-07-12T00:07:56.937606260Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 12 00:07:56.938318 containerd[1748]: time="2025-07-12T00:07:56.937676060Z" level=info msg="Connect containerd service" Jul 12 00:07:56.938318 containerd[1748]: time="2025-07-12T00:07:56.937709020Z" level=info msg="using legacy CRI server" Jul 12 00:07:56.938318 containerd[1748]: time="2025-07-12T00:07:56.937716140Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 12 00:07:56.938318 containerd[1748]: 
time="2025-07-12T00:07:56.937791020Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 12 00:07:56.940147 containerd[1748]: time="2025-07-12T00:07:56.940114700Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 00:07:56.940932 containerd[1748]: time="2025-07-12T00:07:56.940888260Z" level=info msg="Start subscribing containerd event" Jul 12 00:07:56.940973 containerd[1748]: time="2025-07-12T00:07:56.940944220Z" level=info msg="Start recovering state" Jul 12 00:07:56.941028 containerd[1748]: time="2025-07-12T00:07:56.941006580Z" level=info msg="Start event monitor" Jul 12 00:07:56.941028 containerd[1748]: time="2025-07-12T00:07:56.941017580Z" level=info msg="Start snapshots syncer" Jul 12 00:07:56.941028 containerd[1748]: time="2025-07-12T00:07:56.941026940Z" level=info msg="Start cni network conf syncer for default" Jul 12 00:07:56.941091 containerd[1748]: time="2025-07-12T00:07:56.941034420Z" level=info msg="Start streaming server" Jul 12 00:07:56.944840 containerd[1748]: time="2025-07-12T00:07:56.941917660Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 12 00:07:56.944840 containerd[1748]: time="2025-07-12T00:07:56.941964260Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 12 00:07:56.942865 systemd[1]: Started containerd.service - containerd container runtime. Jul 12 00:07:56.953056 containerd[1748]: time="2025-07-12T00:07:56.951739700Z" level=info msg="containerd successfully booted in 0.096238s" Jul 12 00:07:57.025875 tar[1746]: linux-arm64/README.md Jul 12 00:07:57.038318 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 12 00:07:57.198276 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 12 00:07:57.205877 (kubelet)[1826]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:07:57.207967 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 12 00:07:57.218333 systemd[1]: Startup finished in 736ms (kernel) + 12.436s (initrd) + 11.650s (userspace) = 24.823s. Jul 12 00:07:57.511297 login[1811]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:07:57.512961 login[1812]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:07:57.536212 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 12 00:07:57.536423 systemd-logind[1686]: New session 1 of user core. Jul 12 00:07:57.543581 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 12 00:07:57.547077 systemd-logind[1686]: New session 2 of user core. Jul 12 00:07:57.567247 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 12 00:07:57.574877 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 12 00:07:57.580137 (systemd)[1838]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:07:57.637574 kubelet[1826]: E0712 00:07:57.637457 1826 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:07:57.639899 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:07:57.640051 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:07:57.723644 systemd[1838]: Queued start job for default target default.target. 
Jul 12 00:07:57.730443 systemd[1838]: Created slice app.slice - User Application Slice. Jul 12 00:07:57.730548 systemd[1838]: Reached target paths.target - Paths. Jul 12 00:07:57.730634 systemd[1838]: Reached target timers.target - Timers. Jul 12 00:07:57.731912 systemd[1838]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 12 00:07:57.743244 systemd[1838]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 12 00:07:57.743378 systemd[1838]: Reached target sockets.target - Sockets. Jul 12 00:07:57.743391 systemd[1838]: Reached target basic.target - Basic System. Jul 12 00:07:57.743425 systemd[1838]: Reached target default.target - Main User Target. Jul 12 00:07:57.743451 systemd[1838]: Startup finished in 156ms. Jul 12 00:07:57.743817 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 12 00:07:57.752416 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 12 00:07:57.753158 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 12 00:07:58.554282 waagent[1814]: 2025-07-12T00:07:58.553396Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jul 12 00:07:58.559429 waagent[1814]: 2025-07-12T00:07:58.559372Z INFO Daemon Daemon OS: flatcar 4081.3.4 Jul 12 00:07:58.564037 waagent[1814]: 2025-07-12T00:07:58.563984Z INFO Daemon Daemon Python: 3.11.9 Jul 12 00:07:58.570280 waagent[1814]: 2025-07-12T00:07:58.569356Z INFO Daemon Daemon Run daemon Jul 12 00:07:58.574595 waagent[1814]: 2025-07-12T00:07:58.574499Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.4' Jul 12 00:07:58.583582 waagent[1814]: 2025-07-12T00:07:58.583520Z INFO Daemon Daemon Using waagent for provisioning Jul 12 00:07:58.589304 waagent[1814]: 2025-07-12T00:07:58.589228Z INFO Daemon Daemon Activate resource disk Jul 12 00:07:58.594121 waagent[1814]: 2025-07-12T00:07:58.594059Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 12 00:07:58.605232 waagent[1814]: 2025-07-12T00:07:58.605158Z INFO Daemon Daemon Found device: None Jul 12 00:07:58.609718 waagent[1814]: 2025-07-12T00:07:58.609662Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 12 00:07:58.618053 waagent[1814]: 2025-07-12T00:07:58.618002Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 12 00:07:58.631008 waagent[1814]: 2025-07-12T00:07:58.630947Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 12 00:07:58.636752 waagent[1814]: 2025-07-12T00:07:58.636703Z INFO Daemon Daemon Running default provisioning handler Jul 12 00:07:58.649129 waagent[1814]: 2025-07-12T00:07:58.649063Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Jul 12 00:07:58.662596 waagent[1814]: 2025-07-12T00:07:58.662537Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 12 00:07:58.672260 waagent[1814]: 2025-07-12T00:07:58.672202Z INFO Daemon Daemon cloud-init is enabled: False Jul 12 00:07:58.677342 waagent[1814]: 2025-07-12T00:07:58.677292Z INFO Daemon Daemon Copying ovf-env.xml Jul 12 00:07:58.739702 waagent[1814]: 2025-07-12T00:07:58.738907Z INFO Daemon Daemon Successfully mounted dvd Jul 12 00:07:58.767699 waagent[1814]: 2025-07-12T00:07:58.767605Z INFO Daemon Daemon Detect protocol endpoint Jul 12 00:07:58.768316 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 12 00:07:58.772720 waagent[1814]: 2025-07-12T00:07:58.772653Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 12 00:07:58.778501 waagent[1814]: 2025-07-12T00:07:58.778447Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jul 12 00:07:58.784920 waagent[1814]: 2025-07-12T00:07:58.784875Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 12 00:07:58.790144 waagent[1814]: 2025-07-12T00:07:58.790098Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 12 00:07:58.795278 waagent[1814]: 2025-07-12T00:07:58.795219Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 12 00:07:58.828950 waagent[1814]: 2025-07-12T00:07:58.828857Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 12 00:07:58.835533 waagent[1814]: 2025-07-12T00:07:58.835494Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 12 00:07:58.840847 waagent[1814]: 2025-07-12T00:07:58.840787Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 12 00:07:59.285367 waagent[1814]: 2025-07-12T00:07:59.281244Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 12 00:07:59.287960 waagent[1814]: 2025-07-12T00:07:59.287890Z INFO Daemon Daemon Forcing an update of the goal state. 
Jul 12 00:07:59.297108 waagent[1814]: 2025-07-12T00:07:59.297058Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 12 00:07:59.317338 waagent[1814]: 2025-07-12T00:07:59.317249Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jul 12 00:07:59.322983 waagent[1814]: 2025-07-12T00:07:59.322937Z INFO Daemon Jul 12 00:07:59.325743 waagent[1814]: 2025-07-12T00:07:59.325701Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 2b71d650-3ccd-4f3d-a520-c805d858d6ed eTag: 18145433422757953559 source: Fabric] Jul 12 00:07:59.336940 waagent[1814]: 2025-07-12T00:07:59.336896Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jul 12 00:07:59.344043 waagent[1814]: 2025-07-12T00:07:59.343996Z INFO Daemon Jul 12 00:07:59.346878 waagent[1814]: 2025-07-12T00:07:59.346838Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 12 00:07:59.364735 waagent[1814]: 2025-07-12T00:07:59.364701Z INFO Daemon Daemon Downloading artifacts profile blob Jul 12 00:07:59.448888 waagent[1814]: 2025-07-12T00:07:59.448813Z INFO Daemon Downloaded certificate {'thumbprint': 'F7993D6374EDD44CCF8A837C1C26F33D78F272D2', 'hasPrivateKey': False} Jul 12 00:07:59.458736 waagent[1814]: 2025-07-12T00:07:59.458690Z INFO Daemon Downloaded certificate {'thumbprint': 'BE345163E9EF3077A7DD90F46C82E0B3FA3A8975', 'hasPrivateKey': True} Jul 12 00:07:59.468226 waagent[1814]: 2025-07-12T00:07:59.468180Z INFO Daemon Fetch goal state completed Jul 12 00:07:59.479370 waagent[1814]: 2025-07-12T00:07:59.479306Z INFO Daemon Daemon Starting provisioning Jul 12 00:07:59.484350 waagent[1814]: 2025-07-12T00:07:59.484303Z INFO Daemon Daemon Handle ovf-env.xml. 
Jul 12 00:07:59.489089 waagent[1814]: 2025-07-12T00:07:59.489041Z INFO Daemon Daemon Set hostname [ci-4081.3.4-n-047a586f92] Jul 12 00:07:59.509287 waagent[1814]: 2025-07-12T00:07:59.508702Z INFO Daemon Daemon Publish hostname [ci-4081.3.4-n-047a586f92] Jul 12 00:07:59.515646 waagent[1814]: 2025-07-12T00:07:59.515584Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 12 00:07:59.522648 waagent[1814]: 2025-07-12T00:07:59.522598Z INFO Daemon Daemon Primary interface is [eth0] Jul 12 00:07:59.549578 systemd-networkd[1562]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:07:59.549587 systemd-networkd[1562]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:07:59.549611 systemd-networkd[1562]: eth0: DHCP lease lost Jul 12 00:07:59.550809 waagent[1814]: 2025-07-12T00:07:59.550666Z INFO Daemon Daemon Create user account if not exists Jul 12 00:07:59.556519 waagent[1814]: 2025-07-12T00:07:59.556465Z INFO Daemon Daemon User core already exists, skip useradd Jul 12 00:07:59.557325 systemd-networkd[1562]: eth0: DHCPv6 lease lost Jul 12 00:07:59.562529 waagent[1814]: 2025-07-12T00:07:59.562471Z INFO Daemon Daemon Configure sudoer Jul 12 00:07:59.567398 waagent[1814]: 2025-07-12T00:07:59.567342Z INFO Daemon Daemon Configure sshd Jul 12 00:07:59.571793 waagent[1814]: 2025-07-12T00:07:59.571731Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jul 12 00:07:59.584573 waagent[1814]: 2025-07-12T00:07:59.584508Z INFO Daemon Daemon Deploy ssh public key. 
Jul 12 00:07:59.594315 systemd-networkd[1562]: eth0: DHCPv4 address 10.200.20.17/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 12 00:08:00.676780 waagent[1814]: 2025-07-12T00:08:00.672350Z INFO Daemon Daemon Provisioning complete Jul 12 00:08:00.690242 waagent[1814]: 2025-07-12T00:08:00.690191Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 12 00:08:00.696322 waagent[1814]: 2025-07-12T00:08:00.696278Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jul 12 00:08:00.706368 waagent[1814]: 2025-07-12T00:08:00.706326Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jul 12 00:08:00.832953 waagent[1896]: 2025-07-12T00:08:00.832337Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jul 12 00:08:00.832953 waagent[1896]: 2025-07-12T00:08:00.832481Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.4 Jul 12 00:08:00.832953 waagent[1896]: 2025-07-12T00:08:00.832535Z INFO ExtHandler ExtHandler Python: 3.11.9 Jul 12 00:08:00.879399 waagent[1896]: 2025-07-12T00:08:00.879321Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.4; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jul 12 00:08:00.879729 waagent[1896]: 2025-07-12T00:08:00.879691Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 12 00:08:00.879865 waagent[1896]: 2025-07-12T00:08:00.879832Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 12 00:08:00.888174 waagent[1896]: 2025-07-12T00:08:00.888106Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 12 00:08:00.894231 waagent[1896]: 2025-07-12T00:08:00.894192Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jul 12 00:08:00.894804 waagent[1896]: 2025-07-12T00:08:00.894762Z INFO ExtHandler Jul 12 00:08:00.894956 waagent[1896]: 
2025-07-12T00:08:00.894922Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: cc96c50b-e41c-45bf-8194-59e7a404d450 eTag: 18145433422757953559 source: Fabric] Jul 12 00:08:00.895378 waagent[1896]: 2025-07-12T00:08:00.895335Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jul 12 00:08:00.896292 waagent[1896]: 2025-07-12T00:08:00.895995Z INFO ExtHandler Jul 12 00:08:00.896292 waagent[1896]: 2025-07-12T00:08:00.896071Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 12 00:08:00.900113 waagent[1896]: 2025-07-12T00:08:00.900074Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 12 00:08:00.972889 waagent[1896]: 2025-07-12T00:08:00.972754Z INFO ExtHandler Downloaded certificate {'thumbprint': 'F7993D6374EDD44CCF8A837C1C26F33D78F272D2', 'hasPrivateKey': False} Jul 12 00:08:00.973243 waagent[1896]: 2025-07-12T00:08:00.973197Z INFO ExtHandler Downloaded certificate {'thumbprint': 'BE345163E9EF3077A7DD90F46C82E0B3FA3A8975', 'hasPrivateKey': True} Jul 12 00:08:00.973710 waagent[1896]: 2025-07-12T00:08:00.973664Z INFO ExtHandler Fetch goal state completed Jul 12 00:08:00.989570 waagent[1896]: 2025-07-12T00:08:00.989516Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1896 Jul 12 00:08:00.989719 waagent[1896]: 2025-07-12T00:08:00.989680Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 12 00:08:00.991322 waagent[1896]: 2025-07-12T00:08:00.991276Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.4', '', 'Flatcar Container Linux by Kinvolk'] Jul 12 00:08:00.991713 waagent[1896]: 2025-07-12T00:08:00.991672Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 12 00:08:01.021656 waagent[1896]: 2025-07-12T00:08:01.021609Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 12 
00:08:01.021858 waagent[1896]: 2025-07-12T00:08:01.021817Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 12 00:08:01.027741 waagent[1896]: 2025-07-12T00:08:01.027692Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 12 00:08:01.033768 systemd[1]: Reloading requested from client PID 1911 ('systemctl') (unit waagent.service)... Jul 12 00:08:01.033786 systemd[1]: Reloading... Jul 12 00:08:01.117297 zram_generator::config[1951]: No configuration found. Jul 12 00:08:01.212160 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:08:01.308724 systemd[1]: Reloading finished in 274 ms. Jul 12 00:08:01.329531 waagent[1896]: 2025-07-12T00:08:01.329436Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jul 12 00:08:01.336143 systemd[1]: Reloading requested from client PID 1999 ('systemctl') (unit waagent.service)... Jul 12 00:08:01.336157 systemd[1]: Reloading... Jul 12 00:08:01.414536 zram_generator::config[2036]: No configuration found. Jul 12 00:08:01.523065 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:08:01.615008 systemd[1]: Reloading finished in 278 ms. 
Jul 12 00:08:01.641222 waagent[1896]: 2025-07-12T00:08:01.640561Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 12 00:08:01.641222 waagent[1896]: 2025-07-12T00:08:01.640743Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 12 00:08:02.530288 waagent[1896]: 2025-07-12T00:08:02.529217Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 12 00:08:02.530288 waagent[1896]: 2025-07-12T00:08:02.529859Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jul 12 00:08:02.531065 waagent[1896]: 2025-07-12T00:08:02.531012Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 12 00:08:02.531184 waagent[1896]: 2025-07-12T00:08:02.531137Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 12 00:08:02.531339 waagent[1896]: 2025-07-12T00:08:02.531290Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 12 00:08:02.531792 waagent[1896]: 2025-07-12T00:08:02.531735Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 12 00:08:02.532079 waagent[1896]: 2025-07-12T00:08:02.531986Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Jul 12 00:08:02.532172 waagent[1896]: 2025-07-12T00:08:02.532132Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 12 00:08:02.532243 waagent[1896]: 2025-07-12T00:08:02.532213Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 12 00:08:02.532422 waagent[1896]: 2025-07-12T00:08:02.532376Z INFO EnvHandler ExtHandler Configure routes Jul 12 00:08:02.532492 waagent[1896]: 2025-07-12T00:08:02.532462Z INFO EnvHandler ExtHandler Gateway:None Jul 12 00:08:02.532545 waagent[1896]: 2025-07-12T00:08:02.532518Z INFO EnvHandler ExtHandler Routes:None Jul 12 00:08:02.533351 waagent[1896]: 2025-07-12T00:08:02.533298Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 12 00:08:02.533351 waagent[1896]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 12 00:08:02.533351 waagent[1896]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jul 12 00:08:02.533351 waagent[1896]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 12 00:08:02.533351 waagent[1896]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 12 00:08:02.533351 waagent[1896]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 12 00:08:02.533351 waagent[1896]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 12 00:08:02.533882 waagent[1896]: 2025-07-12T00:08:02.533830Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 12 00:08:02.534061 waagent[1896]: 2025-07-12T00:08:02.534024Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jul 12 00:08:02.534449 waagent[1896]: 2025-07-12T00:08:02.534400Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 12 00:08:02.534573 waagent[1896]: 2025-07-12T00:08:02.534538Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jul 12 00:08:02.535176 waagent[1896]: 2025-07-12T00:08:02.535133Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 12 00:08:02.541941 waagent[1896]: 2025-07-12T00:08:02.541905Z INFO ExtHandler ExtHandler Jul 12 00:08:02.543295 waagent[1896]: 2025-07-12T00:08:02.542084Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 07770a33-7e2c-4423-bf85-b0ae9c86d1e7 correlation b87814e9-946a-43d3-b7d2-c4cf9b507b3c created: 2025-07-12T00:06:48.904259Z] Jul 12 00:08:02.543295 waagent[1896]: 2025-07-12T00:08:02.542469Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jul 12 00:08:02.543295 waagent[1896]: 2025-07-12T00:08:02.543033Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jul 12 00:08:02.585275 waagent[1896]: 2025-07-12T00:08:02.585199Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 1CBF7A1A-15BF-46A5-8401-D7A52574A8EA;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jul 12 00:08:02.625177 waagent[1896]: 2025-07-12T00:08:02.625106Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jul 12 00:08:02.625177 waagent[1896]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 12 00:08:02.625177 waagent[1896]: pkts bytes target prot opt in out source destination Jul 12 00:08:02.625177 waagent[1896]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 12 00:08:02.625177 waagent[1896]: pkts bytes target prot opt in out source destination Jul 12 00:08:02.625177 waagent[1896]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 12 00:08:02.625177 waagent[1896]: pkts bytes target prot opt in out source destination Jul 12 00:08:02.625177 waagent[1896]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 12 00:08:02.625177 waagent[1896]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 12 00:08:02.625177 waagent[1896]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 12 00:08:02.628326 waagent[1896]: 2025-07-12T00:08:02.628280Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 12 00:08:02.628326 waagent[1896]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 12 00:08:02.628326 waagent[1896]: pkts bytes target prot opt in out source destination Jul 12 00:08:02.628326 waagent[1896]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 12 00:08:02.628326 waagent[1896]: pkts bytes target prot opt in out source destination Jul 12 00:08:02.628326 waagent[1896]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 12 00:08:02.628326 waagent[1896]: pkts bytes target prot opt in out source destination Jul 12 00:08:02.628326 waagent[1896]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 12 00:08:02.628326 waagent[1896]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 12 00:08:02.628326 waagent[1896]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 12 00:08:02.628812 waagent[1896]: 2025-07-12T00:08:02.628781Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jul 12 00:08:02.695639 waagent[1896]: 
2025-07-12T00:08:02.695579Z INFO MonitorHandler ExtHandler Network interfaces: Jul 12 00:08:02.695639 waagent[1896]: Executing ['ip', '-a', '-o', 'link']: Jul 12 00:08:02.695639 waagent[1896]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 12 00:08:02.695639 waagent[1896]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7a:b4:70 brd ff:ff:ff:ff:ff:ff Jul 12 00:08:02.695639 waagent[1896]: 3: enP44893s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7a:b4:70 brd ff:ff:ff:ff:ff:ff\ altname enP44893p0s2 Jul 12 00:08:02.695639 waagent[1896]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 12 00:08:02.695639 waagent[1896]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 12 00:08:02.695639 waagent[1896]: 2: eth0 inet 10.200.20.17/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 12 00:08:02.695639 waagent[1896]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 12 00:08:02.695639 waagent[1896]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jul 12 00:08:02.695639 waagent[1896]: 2: eth0 inet6 fe80::222:48ff:fe7a:b470/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 12 00:08:02.695639 waagent[1896]: 3: enP44893s1 inet6 fe80::222:48ff:fe7a:b470/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 12 00:08:05.702406 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 12 00:08:05.710581 systemd[1]: Started sshd@0-10.200.20.17:22-10.200.16.10:51202.service - OpenSSH per-connection server daemon (10.200.16.10:51202). 
Jul 12 00:08:06.198818 sshd[2119]: Accepted publickey for core from 10.200.16.10 port 51202 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:08:06.200082 sshd[2119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:08:06.204325 systemd-logind[1686]: New session 3 of user core. Jul 12 00:08:06.210399 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 12 00:08:06.623523 systemd[1]: Started sshd@1-10.200.20.17:22-10.200.16.10:51208.service - OpenSSH per-connection server daemon (10.200.16.10:51208). Jul 12 00:08:07.053095 sshd[2124]: Accepted publickey for core from 10.200.16.10 port 51208 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:08:07.055179 sshd[2124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:08:07.060145 systemd-logind[1686]: New session 4 of user core. Jul 12 00:08:07.066491 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 12 00:08:07.384638 sshd[2124]: pam_unix(sshd:session): session closed for user core Jul 12 00:08:07.388306 systemd[1]: sshd@1-10.200.20.17:22-10.200.16.10:51208.service: Deactivated successfully. Jul 12 00:08:07.389827 systemd[1]: session-4.scope: Deactivated successfully. Jul 12 00:08:07.390462 systemd-logind[1686]: Session 4 logged out. Waiting for processes to exit. Jul 12 00:08:07.391664 systemd-logind[1686]: Removed session 4. Jul 12 00:08:07.468826 systemd[1]: Started sshd@2-10.200.20.17:22-10.200.16.10:51212.service - OpenSSH per-connection server daemon (10.200.16.10:51212). Jul 12 00:08:07.837172 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 12 00:08:07.843442 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 12 00:08:07.919043 sshd[2131]: Accepted publickey for core from 10.200.16.10 port 51212 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:08:07.920820 sshd[2131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:08:07.926928 systemd-logind[1686]: New session 5 of user core. Jul 12 00:08:07.933751 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 12 00:08:07.942425 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:08:07.945488 (kubelet)[2142]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:08:08.086344 kubelet[2142]: E0712 00:08:08.086293 2142 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:08:08.090050 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:08:08.090207 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:08:08.259291 sshd[2131]: pam_unix(sshd:session): session closed for user core Jul 12 00:08:08.262513 systemd[1]: sshd@2-10.200.20.17:22-10.200.16.10:51212.service: Deactivated successfully. Jul 12 00:08:08.264590 systemd[1]: session-5.scope: Deactivated successfully. Jul 12 00:08:08.267052 systemd-logind[1686]: Session 5 logged out. Waiting for processes to exit. Jul 12 00:08:08.268029 systemd-logind[1686]: Removed session 5. Jul 12 00:08:08.350580 systemd[1]: Started sshd@3-10.200.20.17:22-10.200.16.10:51226.service - OpenSSH per-connection server daemon (10.200.16.10:51226). 
Jul 12 00:08:08.821976 sshd[2153]: Accepted publickey for core from 10.200.16.10 port 51226 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:08:08.823211 sshd[2153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:08:08.826941 systemd-logind[1686]: New session 6 of user core. Jul 12 00:08:08.841398 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 12 00:08:09.168666 sshd[2153]: pam_unix(sshd:session): session closed for user core Jul 12 00:08:09.172616 systemd[1]: sshd@3-10.200.20.17:22-10.200.16.10:51226.service: Deactivated successfully. Jul 12 00:08:09.174215 systemd[1]: session-6.scope: Deactivated successfully. Jul 12 00:08:09.175096 systemd-logind[1686]: Session 6 logged out. Waiting for processes to exit. Jul 12 00:08:09.176054 systemd-logind[1686]: Removed session 6. Jul 12 00:08:09.252059 systemd[1]: Started sshd@4-10.200.20.17:22-10.200.16.10:51236.service - OpenSSH per-connection server daemon (10.200.16.10:51236). Jul 12 00:08:09.700080 sshd[2160]: Accepted publickey for core from 10.200.16.10 port 51236 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:08:09.701430 sshd[2160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:08:09.705087 systemd-logind[1686]: New session 7 of user core. Jul 12 00:08:09.714388 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 12 00:08:10.054178 sudo[2163]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 12 00:08:10.054548 sudo[2163]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:08:10.084104 sudo[2163]: pam_unix(sudo:session): session closed for user root Jul 12 00:08:10.174009 sshd[2160]: pam_unix(sshd:session): session closed for user core Jul 12 00:08:10.177567 systemd[1]: sshd@4-10.200.20.17:22-10.200.16.10:51236.service: Deactivated successfully. 
Jul 12 00:08:10.179082 systemd[1]: session-7.scope: Deactivated successfully. Jul 12 00:08:10.179918 systemd-logind[1686]: Session 7 logged out. Waiting for processes to exit. Jul 12 00:08:10.180922 systemd-logind[1686]: Removed session 7. Jul 12 00:08:10.262220 systemd[1]: Started sshd@5-10.200.20.17:22-10.200.16.10:41006.service - OpenSSH per-connection server daemon (10.200.16.10:41006). Jul 12 00:08:10.752802 sshd[2168]: Accepted publickey for core from 10.200.16.10 port 41006 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:08:10.754166 sshd[2168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:08:10.758860 systemd-logind[1686]: New session 8 of user core. Jul 12 00:08:10.764426 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 12 00:08:11.029093 sudo[2172]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 12 00:08:11.029920 sudo[2172]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:08:11.033067 sudo[2172]: pam_unix(sudo:session): session closed for user root Jul 12 00:08:11.037406 sudo[2171]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 12 00:08:11.037665 sudo[2171]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:08:11.050944 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 12 00:08:11.051739 auditctl[2175]: No rules Jul 12 00:08:11.052194 systemd[1]: audit-rules.service: Deactivated successfully. Jul 12 00:08:11.052379 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 12 00:08:11.054885 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 12 00:08:11.077771 augenrules[2193]: No rules Jul 12 00:08:11.079128 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jul 12 00:08:11.080229 sudo[2171]: pam_unix(sudo:session): session closed for user root Jul 12 00:08:11.165856 sshd[2168]: pam_unix(sshd:session): session closed for user core Jul 12 00:08:11.169370 systemd[1]: sshd@5-10.200.20.17:22-10.200.16.10:41006.service: Deactivated successfully. Jul 12 00:08:11.170970 systemd[1]: session-8.scope: Deactivated successfully. Jul 12 00:08:11.171673 systemd-logind[1686]: Session 8 logged out. Waiting for processes to exit. Jul 12 00:08:11.172654 systemd-logind[1686]: Removed session 8. Jul 12 00:08:11.243759 systemd[1]: Started sshd@6-10.200.20.17:22-10.200.16.10:41010.service - OpenSSH per-connection server daemon (10.200.16.10:41010). Jul 12 00:08:11.697191 sshd[2201]: Accepted publickey for core from 10.200.16.10 port 41010 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:08:11.698550 sshd[2201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:08:11.703541 systemd-logind[1686]: New session 9 of user core. Jul 12 00:08:11.709421 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 12 00:08:11.956024 sudo[2204]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 12 00:08:11.956354 sudo[2204]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:08:12.762600 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 12 00:08:12.762609 (dockerd)[2219]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 12 00:08:13.468422 dockerd[2219]: time="2025-07-12T00:08:13.468363660Z" level=info msg="Starting up" Jul 12 00:08:13.852757 dockerd[2219]: time="2025-07-12T00:08:13.852642420Z" level=info msg="Loading containers: start." 
Jul 12 00:08:14.020277 kernel: Initializing XFRM netlink socket Jul 12 00:08:14.144390 systemd-networkd[1562]: docker0: Link UP Jul 12 00:08:14.175439 dockerd[2219]: time="2025-07-12T00:08:14.175402420Z" level=info msg="Loading containers: done." Jul 12 00:08:14.212814 dockerd[2219]: time="2025-07-12T00:08:14.212757300Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 12 00:08:14.213029 dockerd[2219]: time="2025-07-12T00:08:14.212880540Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 12 00:08:14.213029 dockerd[2219]: time="2025-07-12T00:08:14.213000260Z" level=info msg="Daemon has completed initialization" Jul 12 00:08:14.287163 dockerd[2219]: time="2025-07-12T00:08:14.287071660Z" level=info msg="API listen on /run/docker.sock" Jul 12 00:08:14.287535 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 12 00:08:14.915266 containerd[1748]: time="2025-07-12T00:08:14.915216540Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 12 00:08:15.881162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount333450186.mount: Deactivated successfully. 
Jul 12 00:08:17.417300 containerd[1748]: time="2025-07-12T00:08:17.416321820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:17.422291 containerd[1748]: time="2025-07-12T00:08:17.422177180Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351716" Jul 12 00:08:17.425649 containerd[1748]: time="2025-07-12T00:08:17.425584820Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:17.433122 containerd[1748]: time="2025-07-12T00:08:17.433073140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:17.434211 containerd[1748]: time="2025-07-12T00:08:17.434172060Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 2.51889168s" Jul 12 00:08:17.434889 containerd[1748]: time="2025-07-12T00:08:17.434212500Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\"" Jul 12 00:08:17.435743 containerd[1748]: time="2025-07-12T00:08:17.435592260Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 12 00:08:18.201082 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jul 12 00:08:18.207201 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:08:18.300420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:08:18.303241 (kubelet)[2417]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:08:18.872440 kubelet[2417]: E0712 00:08:18.589914 2417 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:08:18.592319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:08:18.592475 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:08:19.345944 containerd[1748]: time="2025-07-12T00:08:19.345892540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:19.352336 containerd[1748]: time="2025-07-12T00:08:19.352287980Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537623" Jul 12 00:08:19.358946 containerd[1748]: time="2025-07-12T00:08:19.358876460Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:19.371141 containerd[1748]: time="2025-07-12T00:08:19.371100100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:19.372230 containerd[1748]: time="2025-07-12T00:08:19.372198660Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 1.93657664s" Jul 12 00:08:19.372283 containerd[1748]: time="2025-07-12T00:08:19.372234540Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\"" Jul 12 00:08:19.373092 containerd[1748]: time="2025-07-12T00:08:19.373030100Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 12 00:08:19.619069 chronyd[1667]: Selected source PHC0 Jul 12 00:08:21.160211 containerd[1748]: time="2025-07-12T00:08:21.160152355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:21.167336 containerd[1748]: time="2025-07-12T00:08:21.167308028Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293515" Jul 12 00:08:21.180174 containerd[1748]: time="2025-07-12T00:08:21.180123935Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:21.186950 containerd[1748]: time="2025-07-12T00:08:21.186880488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:21.188076 containerd[1748]: time="2025-07-12T00:08:21.187946447Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id 
\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.814884227s" Jul 12 00:08:21.188076 containerd[1748]: time="2025-07-12T00:08:21.187982127Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\"" Jul 12 00:08:21.188669 containerd[1748]: time="2025-07-12T00:08:21.188521686Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 12 00:08:22.397023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3636003759.mount: Deactivated successfully. Jul 12 00:08:22.760704 containerd[1748]: time="2025-07-12T00:08:22.760653442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:22.764366 containerd[1748]: time="2025-07-12T00:08:22.764186639Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199472" Jul 12 00:08:22.772218 containerd[1748]: time="2025-07-12T00:08:22.772153151Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:22.780442 containerd[1748]: time="2025-07-12T00:08:22.780369182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:22.781476 containerd[1748]: time="2025-07-12T00:08:22.781005261Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag 
\"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 1.592452695s" Jul 12 00:08:22.781476 containerd[1748]: time="2025-07-12T00:08:22.781041301Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\"" Jul 12 00:08:22.781658 containerd[1748]: time="2025-07-12T00:08:22.781628341Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 12 00:08:23.465125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount717519251.mount: Deactivated successfully. Jul 12 00:08:24.904880 containerd[1748]: time="2025-07-12T00:08:24.904830087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:24.908238 containerd[1748]: time="2025-07-12T00:08:24.908208363Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Jul 12 00:08:24.921392 containerd[1748]: time="2025-07-12T00:08:24.921342350Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:24.926528 containerd[1748]: time="2025-07-12T00:08:24.926498465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:24.927742 containerd[1748]: time="2025-07-12T00:08:24.927611504Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 2.145948763s" Jul 12 00:08:24.927742 containerd[1748]: time="2025-07-12T00:08:24.927647224Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jul 12 00:08:24.928488 containerd[1748]: time="2025-07-12T00:08:24.928336823Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 12 00:08:25.882574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1110019760.mount: Deactivated successfully. Jul 12 00:08:25.939226 containerd[1748]: time="2025-07-12T00:08:25.938458909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:25.941662 containerd[1748]: time="2025-07-12T00:08:25.941633825Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jul 12 00:08:25.948562 containerd[1748]: time="2025-07-12T00:08:25.948517938Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:25.954920 containerd[1748]: time="2025-07-12T00:08:25.954870252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:25.956096 containerd[1748]: time="2025-07-12T00:08:25.955597171Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.027228708s" Jul 12 
00:08:25.956096 containerd[1748]: time="2025-07-12T00:08:25.955634171Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 12 00:08:25.956297 containerd[1748]: time="2025-07-12T00:08:25.956250490Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 12 00:08:26.655018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2174877570.mount: Deactivated successfully. Jul 12 00:08:28.701063 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 12 00:08:28.706455 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:08:28.715429 containerd[1748]: time="2025-07-12T00:08:28.714364244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:28.724751 containerd[1748]: time="2025-07-12T00:08:28.724709654Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334599" Jul 12 00:08:28.734357 containerd[1748]: time="2025-07-12T00:08:28.733053110Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:28.804457 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 12 00:08:28.806717 (kubelet)[2560]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:08:28.840222 kubelet[2560]: E0712 00:08:28.840155 2560 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:08:28.842952 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:08:28.843221 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:08:29.276313 containerd[1748]: time="2025-07-12T00:08:29.275379044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:29.277510 containerd[1748]: time="2025-07-12T00:08:29.276621041Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.320317591s" Jul 12 00:08:29.277510 containerd[1748]: time="2025-07-12T00:08:29.276659640Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jul 12 00:08:34.057283 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:08:34.068535 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:08:34.097835 systemd[1]: Reloading requested from client PID 2594 ('systemctl') (unit session-9.scope)... 
Jul 12 00:08:34.097853 systemd[1]: Reloading... Jul 12 00:08:34.199288 zram_generator::config[2634]: No configuration found. Jul 12 00:08:34.322826 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:08:34.419151 systemd[1]: Reloading finished in 320 ms. Jul 12 00:08:34.477852 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 12 00:08:34.478116 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 12 00:08:34.478400 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:08:34.489250 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:08:36.395367 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:08:36.404562 (kubelet)[2699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:08:36.439843 kubelet[2699]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:08:36.439843 kubelet[2699]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 12 00:08:36.439843 kubelet[2699]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 12 00:08:36.440206 kubelet[2699]: I0712 00:08:36.439896 2699 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:08:37.103278 kubelet[2699]: I0712 00:08:37.102327 2699 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 12 00:08:37.103278 kubelet[2699]: I0712 00:08:37.102358 2699 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:08:37.103278 kubelet[2699]: I0712 00:08:37.102623 2699 server.go:956] "Client rotation is on, will bootstrap in background" Jul 12 00:08:37.115972 kubelet[2699]: E0712 00:08:37.115923 2699 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 12 00:08:37.120281 kubelet[2699]: I0712 00:08:37.118729 2699 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:08:37.126564 kubelet[2699]: E0712 00:08:37.126529 2699 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:08:37.126674 kubelet[2699]: I0712 00:08:37.126661 2699 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:08:37.129611 kubelet[2699]: I0712 00:08:37.129591 2699 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 00:08:37.130901 kubelet[2699]: I0712 00:08:37.130849 2699 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:08:37.131234 kubelet[2699]: I0712 00:08:37.131073 2699 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.4-n-047a586f92","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 12 00:08:37.131409 kubelet[2699]: I0712 00:08:37.131395 2699 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 
00:08:37.131464 kubelet[2699]: I0712 00:08:37.131456 2699 container_manager_linux.go:303] "Creating device plugin manager" Jul 12 00:08:37.131638 kubelet[2699]: I0712 00:08:37.131625 2699 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:08:37.134391 kubelet[2699]: I0712 00:08:37.134371 2699 kubelet.go:480] "Attempting to sync node with API server" Jul 12 00:08:37.134475 kubelet[2699]: I0712 00:08:37.134466 2699 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:08:37.134557 kubelet[2699]: I0712 00:08:37.134548 2699 kubelet.go:386] "Adding apiserver pod source" Jul 12 00:08:37.135799 kubelet[2699]: I0712 00:08:37.135780 2699 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:08:37.136857 kubelet[2699]: E0712 00:08:37.136816 2699 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-n-047a586f92&limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 12 00:08:37.137204 kubelet[2699]: E0712 00:08:37.137171 2699 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 12 00:08:37.137511 kubelet[2699]: I0712 00:08:37.137485 2699 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 12 00:08:37.138060 kubelet[2699]: I0712 00:08:37.138031 2699 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 12 
00:08:37.138111 kubelet[2699]: W0712 00:08:37.138085 2699 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 12 00:08:37.140267 kubelet[2699]: I0712 00:08:37.140219 2699 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 12 00:08:37.140935 kubelet[2699]: I0712 00:08:37.140331 2699 server.go:1289] "Started kubelet" Jul 12 00:08:37.143358 kubelet[2699]: I0712 00:08:37.143313 2699 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:08:37.143770 kubelet[2699]: I0712 00:08:37.143708 2699 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:08:37.144026 kubelet[2699]: I0712 00:08:37.143996 2699 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:08:37.144396 kubelet[2699]: I0712 00:08:37.144376 2699 server.go:317] "Adding debug handlers to kubelet server" Jul 12 00:08:37.146172 kubelet[2699]: I0712 00:08:37.146152 2699 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:08:37.148093 kubelet[2699]: E0712 00:08:37.147250 2699 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.17:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.17:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.4-n-047a586f92.18515864238cc781 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.4-n-047a586f92,UID:ci-4081.3.4-n-047a586f92,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.4-n-047a586f92,},FirstTimestamp:2025-07-12 00:08:37.140236161 +0000 UTC m=+0.732514034,LastTimestamp:2025-07-12 00:08:37.140236161 +0000 UTC m=+0.732514034,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.4-n-047a586f92,}" Jul 12 00:08:37.148450 kubelet[2699]: I0712 00:08:37.148419 2699 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:08:37.150586 kubelet[2699]: E0712 00:08:37.150554 2699 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-n-047a586f92\" not found" Jul 12 00:08:37.150586 kubelet[2699]: I0712 00:08:37.150582 2699 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 12 00:08:37.151174 kubelet[2699]: I0712 00:08:37.151136 2699 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 12 00:08:37.151248 kubelet[2699]: I0712 00:08:37.151205 2699 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:08:37.153103 kubelet[2699]: E0712 00:08:37.153067 2699 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-n-047a586f92?timeout=10s\": dial tcp 10.200.20.17:6443: connect: connection refused" interval="200ms" Jul 12 00:08:37.153419 kubelet[2699]: I0712 00:08:37.153400 2699 factory.go:223] Registration of the systemd container factory successfully Jul 12 00:08:37.153928 kubelet[2699]: I0712 00:08:37.153898 2699 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:08:37.155547 kubelet[2699]: E0712 00:08:37.155527 2699 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:08:37.155770 kubelet[2699]: I0712 00:08:37.155756 2699 factory.go:223] Registration of the containerd container factory successfully Jul 12 00:08:37.170227 kubelet[2699]: I0712 00:08:37.170177 2699 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 12 00:08:37.171141 kubelet[2699]: I0712 00:08:37.171114 2699 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 12 00:08:37.171141 kubelet[2699]: I0712 00:08:37.171141 2699 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 12 00:08:37.171196 kubelet[2699]: I0712 00:08:37.171163 2699 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 12 00:08:37.171196 kubelet[2699]: I0712 00:08:37.171170 2699 kubelet.go:2436] "Starting kubelet main sync loop" Jul 12 00:08:37.171242 kubelet[2699]: E0712 00:08:37.171209 2699 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:08:37.177709 kubelet[2699]: E0712 00:08:37.177674 2699 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 12 00:08:37.179402 kubelet[2699]: E0712 00:08:37.179246 2699 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 12 00:08:37.251685 
kubelet[2699]: E0712 00:08:37.251624 2699 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-n-047a586f92\" not found" Jul 12 00:08:37.254241 kubelet[2699]: I0712 00:08:37.254212 2699 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 12 00:08:37.254611 kubelet[2699]: I0712 00:08:37.254366 2699 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 12 00:08:37.254611 kubelet[2699]: I0712 00:08:37.254388 2699 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:08:37.263365 kubelet[2699]: I0712 00:08:37.263341 2699 policy_none.go:49] "None policy: Start" Jul 12 00:08:37.263691 kubelet[2699]: I0712 00:08:37.263473 2699 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 12 00:08:37.263691 kubelet[2699]: I0712 00:08:37.263491 2699 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:08:37.271769 kubelet[2699]: E0712 00:08:37.271742 2699 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 12 00:08:37.272466 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 12 00:08:37.284065 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 12 00:08:37.287762 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 12 00:08:37.295751 kubelet[2699]: E0712 00:08:37.295100 2699 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 12 00:08:37.295751 kubelet[2699]: I0712 00:08:37.295298 2699 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:08:37.295751 kubelet[2699]: I0712 00:08:37.295309 2699 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:08:37.295751 kubelet[2699]: I0712 00:08:37.295572 2699 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:08:37.296840 kubelet[2699]: E0712 00:08:37.296690 2699 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 12 00:08:37.296840 kubelet[2699]: E0712 00:08:37.296737 2699 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.4-n-047a586f92\" not found" Jul 12 00:08:37.355587 kubelet[2699]: E0712 00:08:37.354391 2699 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-n-047a586f92?timeout=10s\": dial tcp 10.200.20.17:6443: connect: connection refused" interval="400ms" Jul 12 00:08:37.396813 kubelet[2699]: I0712 00:08:37.396760 2699 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-n-047a586f92" Jul 12 00:08:37.397134 kubelet[2699]: E0712 00:08:37.397094 2699 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.17:6443/api/v1/nodes\": dial tcp 10.200.20.17:6443: connect: connection refused" node="ci-4081.3.4-n-047a586f92" Jul 12 00:08:37.485300 systemd[1]: Created slice kubepods-burstable-pod4f28d7d03c493b8dbed4771cd2f6f307.slice - libcontainer container 
kubepods-burstable-pod4f28d7d03c493b8dbed4771cd2f6f307.slice. Jul 12 00:08:37.492065 kubelet[2699]: E0712 00:08:37.492032 2699 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-n-047a586f92\" not found" node="ci-4081.3.4-n-047a586f92" Jul 12 00:08:37.497807 systemd[1]: Created slice kubepods-burstable-podd568fbc8f55e5489f7f78b6260272c52.slice - libcontainer container kubepods-burstable-podd568fbc8f55e5489f7f78b6260272c52.slice. Jul 12 00:08:37.509895 kubelet[2699]: E0712 00:08:37.509722 2699 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-n-047a586f92\" not found" node="ci-4081.3.4-n-047a586f92" Jul 12 00:08:37.511972 systemd[1]: Created slice kubepods-burstable-podd547b99baa5365d07bba6231b22f5f93.slice - libcontainer container kubepods-burstable-podd547b99baa5365d07bba6231b22f5f93.slice. Jul 12 00:08:37.514231 kubelet[2699]: E0712 00:08:37.514184 2699 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-n-047a586f92\" not found" node="ci-4081.3.4-n-047a586f92" Jul 12 00:08:37.553469 kubelet[2699]: I0712 00:08:37.553368 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d547b99baa5365d07bba6231b22f5f93-kubeconfig\") pod \"kube-scheduler-ci-4081.3.4-n-047a586f92\" (UID: \"d547b99baa5365d07bba6231b22f5f93\") " pod="kube-system/kube-scheduler-ci-4081.3.4-n-047a586f92" Jul 12 00:08:37.553469 kubelet[2699]: I0712 00:08:37.553402 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d568fbc8f55e5489f7f78b6260272c52-ca-certs\") pod \"kube-controller-manager-ci-4081.3.4-n-047a586f92\" (UID: \"d568fbc8f55e5489f7f78b6260272c52\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.4-n-047a586f92" Jul 12 00:08:37.553469 kubelet[2699]: I0712 00:08:37.553421 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d568fbc8f55e5489f7f78b6260272c52-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.4-n-047a586f92\" (UID: \"d568fbc8f55e5489f7f78b6260272c52\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-n-047a586f92" Jul 12 00:08:37.553469 kubelet[2699]: I0712 00:08:37.553438 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f28d7d03c493b8dbed4771cd2f6f307-ca-certs\") pod \"kube-apiserver-ci-4081.3.4-n-047a586f92\" (UID: \"4f28d7d03c493b8dbed4771cd2f6f307\") " pod="kube-system/kube-apiserver-ci-4081.3.4-n-047a586f92" Jul 12 00:08:37.553469 kubelet[2699]: I0712 00:08:37.553454 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f28d7d03c493b8dbed4771cd2f6f307-k8s-certs\") pod \"kube-apiserver-ci-4081.3.4-n-047a586f92\" (UID: \"4f28d7d03c493b8dbed4771cd2f6f307\") " pod="kube-system/kube-apiserver-ci-4081.3.4-n-047a586f92" Jul 12 00:08:37.553667 kubelet[2699]: I0712 00:08:37.553470 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f28d7d03c493b8dbed4771cd2f6f307-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.4-n-047a586f92\" (UID: \"4f28d7d03c493b8dbed4771cd2f6f307\") " pod="kube-system/kube-apiserver-ci-4081.3.4-n-047a586f92" Jul 12 00:08:37.553667 kubelet[2699]: I0712 00:08:37.553483 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/d568fbc8f55e5489f7f78b6260272c52-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.4-n-047a586f92\" (UID: \"d568fbc8f55e5489f7f78b6260272c52\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-n-047a586f92" Jul 12 00:08:37.553667 kubelet[2699]: I0712 00:08:37.553497 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d568fbc8f55e5489f7f78b6260272c52-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.4-n-047a586f92\" (UID: \"d568fbc8f55e5489f7f78b6260272c52\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-n-047a586f92" Jul 12 00:08:37.553667 kubelet[2699]: I0712 00:08:37.553512 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d568fbc8f55e5489f7f78b6260272c52-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.4-n-047a586f92\" (UID: \"d568fbc8f55e5489f7f78b6260272c52\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-n-047a586f92" Jul 12 00:08:37.599098 kubelet[2699]: I0712 00:08:37.599072 2699 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-n-047a586f92" Jul 12 00:08:37.599437 kubelet[2699]: E0712 00:08:37.599406 2699 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.17:6443/api/v1/nodes\": dial tcp 10.200.20.17:6443: connect: connection refused" node="ci-4081.3.4-n-047a586f92" Jul 12 00:08:37.755305 kubelet[2699]: E0712 00:08:37.755245 2699 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-n-047a586f92?timeout=10s\": dial tcp 10.200.20.17:6443: connect: connection refused" interval="800ms" Jul 12 00:08:37.793079 containerd[1748]: time="2025-07-12T00:08:37.793040015Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.4-n-047a586f92,Uid:4f28d7d03c493b8dbed4771cd2f6f307,Namespace:kube-system,Attempt:0,}" Jul 12 00:08:37.811105 containerd[1748]: time="2025-07-12T00:08:37.811071992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.4-n-047a586f92,Uid:d568fbc8f55e5489f7f78b6260272c52,Namespace:kube-system,Attempt:0,}" Jul 12 00:08:37.815812 containerd[1748]: time="2025-07-12T00:08:37.815622667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.4-n-047a586f92,Uid:d547b99baa5365d07bba6231b22f5f93,Namespace:kube-system,Attempt:0,}" Jul 12 00:08:38.004637 kubelet[2699]: I0712 00:08:38.004586 2699 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-n-047a586f92" Jul 12 00:08:38.004985 kubelet[2699]: E0712 00:08:38.004951 2699 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.17:6443/api/v1/nodes\": dial tcp 10.200.20.17:6443: connect: connection refused" node="ci-4081.3.4-n-047a586f92" Jul 12 00:08:38.007424 kubelet[2699]: E0712 00:08:38.007345 2699 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 12 00:08:38.286228 kubelet[2699]: E0712 00:08:38.286107 2699 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 12 00:08:38.552932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2495066066.mount: 
Deactivated successfully. Jul 12 00:08:38.556035 kubelet[2699]: E0712 00:08:38.556000 2699 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-n-047a586f92?timeout=10s\": dial tcp 10.200.20.17:6443: connect: connection refused" interval="1.6s" Jul 12 00:08:38.596139 containerd[1748]: time="2025-07-12T00:08:38.596076919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:08:38.615449 containerd[1748]: time="2025-07-12T00:08:38.615408535Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jul 12 00:08:38.615825 kubelet[2699]: E0712 00:08:38.615784 2699 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-n-047a586f92&limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 12 00:08:38.620290 containerd[1748]: time="2025-07-12T00:08:38.620231929Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:08:38.629051 containerd[1748]: time="2025-07-12T00:08:38.629022438Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 12 00:08:38.634124 containerd[1748]: time="2025-07-12T00:08:38.634078991Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:08:38.642998 containerd[1748]: 
time="2025-07-12T00:08:38.642959580Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:08:38.647385 containerd[1748]: time="2025-07-12T00:08:38.647351415Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 12 00:08:38.659346 containerd[1748]: time="2025-07-12T00:08:38.659291759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:08:38.660045 containerd[1748]: time="2025-07-12T00:08:38.659828519Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 844.133892ms" Jul 12 00:08:38.660864 containerd[1748]: time="2025-07-12T00:08:38.660824397Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 867.705382ms" Jul 12 00:08:38.661796 containerd[1748]: time="2025-07-12T00:08:38.661765436Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size 
\"268403\" in 850.631724ms" Jul 12 00:08:38.756326 kubelet[2699]: E0712 00:08:38.756282 2699 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 12 00:08:38.807588 kubelet[2699]: I0712 00:08:38.807152 2699 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-n-047a586f92" Jul 12 00:08:38.807588 kubelet[2699]: E0712 00:08:38.807507 2699 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.17:6443/api/v1/nodes\": dial tcp 10.200.20.17:6443: connect: connection refused" node="ci-4081.3.4-n-047a586f92" Jul 12 00:08:39.139563 kubelet[2699]: E0712 00:08:39.139453 2699 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 12 00:08:39.189407 containerd[1748]: time="2025-07-12T00:08:39.189204729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:39.189407 containerd[1748]: time="2025-07-12T00:08:39.189358009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:39.189407 containerd[1748]: time="2025-07-12T00:08:39.189383809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:39.191158 containerd[1748]: time="2025-07-12T00:08:39.190237848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:39.192486 containerd[1748]: time="2025-07-12T00:08:39.192125605Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:39.192486 containerd[1748]: time="2025-07-12T00:08:39.192223325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:39.192486 containerd[1748]: time="2025-07-12T00:08:39.192301805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:39.192638 containerd[1748]: time="2025-07-12T00:08:39.192516725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:39.196585 containerd[1748]: time="2025-07-12T00:08:39.196330720Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:39.196585 containerd[1748]: time="2025-07-12T00:08:39.196386840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:39.196585 containerd[1748]: time="2025-07-12T00:08:39.196401680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:39.196585 containerd[1748]: time="2025-07-12T00:08:39.196468960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:39.215873 systemd[1]: Started cri-containerd-7ce3231cc67bdfcd451454c1ec7339a2e21326048fb9bc15e4784289bc4271e3.scope - libcontainer container 7ce3231cc67bdfcd451454c1ec7339a2e21326048fb9bc15e4784289bc4271e3. Jul 12 00:08:39.221990 systemd[1]: Started cri-containerd-0eb0fcfb17d777cf8b8977ecefd8cfec1b619208259ff01788997e19f622c70f.scope - libcontainer container 0eb0fcfb17d777cf8b8977ecefd8cfec1b619208259ff01788997e19f622c70f. Jul 12 00:08:39.224171 systemd[1]: Started cri-containerd-cd0e2e592b4d8f1563eb2c96ee2f55b4f8e4327e8af04b09bdf3d8c8bd4e4b1d.scope - libcontainer container cd0e2e592b4d8f1563eb2c96ee2f55b4f8e4327e8af04b09bdf3d8c8bd4e4b1d. Jul 12 00:08:39.256074 containerd[1748]: time="2025-07-12T00:08:39.255999325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.4-n-047a586f92,Uid:4f28d7d03c493b8dbed4771cd2f6f307,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ce3231cc67bdfcd451454c1ec7339a2e21326048fb9bc15e4784289bc4271e3\"" Jul 12 00:08:39.269155 containerd[1748]: time="2025-07-12T00:08:39.269043468Z" level=info msg="CreateContainer within sandbox \"7ce3231cc67bdfcd451454c1ec7339a2e21326048fb9bc15e4784289bc4271e3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 12 00:08:39.277672 containerd[1748]: time="2025-07-12T00:08:39.277628937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.4-n-047a586f92,Uid:d568fbc8f55e5489f7f78b6260272c52,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd0e2e592b4d8f1563eb2c96ee2f55b4f8e4327e8af04b09bdf3d8c8bd4e4b1d\"" Jul 12 00:08:39.283011 containerd[1748]: time="2025-07-12T00:08:39.282973410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.4-n-047a586f92,Uid:d547b99baa5365d07bba6231b22f5f93,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"0eb0fcfb17d777cf8b8977ecefd8cfec1b619208259ff01788997e19f622c70f\"" Jul 12 00:08:39.286436 containerd[1748]: time="2025-07-12T00:08:39.286394166Z" level=info msg="CreateContainer within sandbox \"cd0e2e592b4d8f1563eb2c96ee2f55b4f8e4327e8af04b09bdf3d8c8bd4e4b1d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 12 00:08:39.293511 containerd[1748]: time="2025-07-12T00:08:39.293476557Z" level=info msg="CreateContainer within sandbox \"0eb0fcfb17d777cf8b8977ecefd8cfec1b619208259ff01788997e19f622c70f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 12 00:08:39.393104 containerd[1748]: time="2025-07-12T00:08:39.392887591Z" level=info msg="CreateContainer within sandbox \"7ce3231cc67bdfcd451454c1ec7339a2e21326048fb9bc15e4784289bc4271e3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"915e5e097b75c42e4ac60140effce2f3dc7ea204aa1b85602dcc6f37656fc40a\"" Jul 12 00:08:39.393861 containerd[1748]: time="2025-07-12T00:08:39.393821030Z" level=info msg="StartContainer for \"915e5e097b75c42e4ac60140effce2f3dc7ea204aa1b85602dcc6f37656fc40a\"" Jul 12 00:08:39.399191 containerd[1748]: time="2025-07-12T00:08:39.399080944Z" level=info msg="CreateContainer within sandbox \"0eb0fcfb17d777cf8b8977ecefd8cfec1b619208259ff01788997e19f622c70f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5cb5e65936a1ee61720d75438b2cb440890a1dae7bc93c593f478ea6323bcffe\"" Jul 12 00:08:39.399804 containerd[1748]: time="2025-07-12T00:08:39.399667543Z" level=info msg="StartContainer for \"5cb5e65936a1ee61720d75438b2cb440890a1dae7bc93c593f478ea6323bcffe\"" Jul 12 00:08:39.403822 containerd[1748]: time="2025-07-12T00:08:39.403729258Z" level=info msg="CreateContainer within sandbox \"cd0e2e592b4d8f1563eb2c96ee2f55b4f8e4327e8af04b09bdf3d8c8bd4e4b1d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"1c9428b22a73b011bd3b7bebd94acc640181bea734c7e9adbe1259a7a0e0bcd3\"" Jul 12 00:08:39.405252 containerd[1748]: time="2025-07-12T00:08:39.404303617Z" level=info msg="StartContainer for \"1c9428b22a73b011bd3b7bebd94acc640181bea734c7e9adbe1259a7a0e0bcd3\"" Jul 12 00:08:39.429605 systemd[1]: Started cri-containerd-915e5e097b75c42e4ac60140effce2f3dc7ea204aa1b85602dcc6f37656fc40a.scope - libcontainer container 915e5e097b75c42e4ac60140effce2f3dc7ea204aa1b85602dcc6f37656fc40a. Jul 12 00:08:39.432904 systemd[1]: Started cri-containerd-5cb5e65936a1ee61720d75438b2cb440890a1dae7bc93c593f478ea6323bcffe.scope - libcontainer container 5cb5e65936a1ee61720d75438b2cb440890a1dae7bc93c593f478ea6323bcffe. Jul 12 00:08:39.443400 systemd[1]: Started cri-containerd-1c9428b22a73b011bd3b7bebd94acc640181bea734c7e9adbe1259a7a0e0bcd3.scope - libcontainer container 1c9428b22a73b011bd3b7bebd94acc640181bea734c7e9adbe1259a7a0e0bcd3. Jul 12 00:08:39.485692 containerd[1748]: time="2025-07-12T00:08:39.485653954Z" level=info msg="StartContainer for \"915e5e097b75c42e4ac60140effce2f3dc7ea204aa1b85602dcc6f37656fc40a\" returns successfully" Jul 12 00:08:39.494129 containerd[1748]: time="2025-07-12T00:08:39.494085623Z" level=info msg="StartContainer for \"5cb5e65936a1ee61720d75438b2cb440890a1dae7bc93c593f478ea6323bcffe\" returns successfully" Jul 12 00:08:39.502841 containerd[1748]: time="2025-07-12T00:08:39.502786892Z" level=info msg="StartContainer for \"1c9428b22a73b011bd3b7bebd94acc640181bea734c7e9adbe1259a7a0e0bcd3\" returns successfully" Jul 12 00:08:40.193308 kubelet[2699]: E0712 00:08:40.192557 2699 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-n-047a586f92\" not found" node="ci-4081.3.4-n-047a586f92" Jul 12 00:08:40.197062 kubelet[2699]: E0712 00:08:40.196717 2699 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-n-047a586f92\" not found" 
node="ci-4081.3.4-n-047a586f92" Jul 12 00:08:40.199528 kubelet[2699]: E0712 00:08:40.199502 2699 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-n-047a586f92\" not found" node="ci-4081.3.4-n-047a586f92" Jul 12 00:08:40.409401 kubelet[2699]: I0712 00:08:40.409369 2699 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-n-047a586f92" Jul 12 00:08:40.762282 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jul 12 00:08:41.201701 kubelet[2699]: E0712 00:08:41.201657 2699 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-n-047a586f92\" not found" node="ci-4081.3.4-n-047a586f92" Jul 12 00:08:41.202094 kubelet[2699]: E0712 00:08:41.202072 2699 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-n-047a586f92\" not found" node="ci-4081.3.4-n-047a586f92" Jul 12 00:08:41.397903 update_engine[1689]: I20250712 00:08:41.397307 1689 update_attempter.cc:509] Updating boot flags... 
Jul 12 00:08:41.471285 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2994) Jul 12 00:08:41.957437 kubelet[2699]: E0712 00:08:41.957370 2699 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.4-n-047a586f92\" not found" node="ci-4081.3.4-n-047a586f92" Jul 12 00:08:42.060201 kubelet[2699]: I0712 00:08:42.060161 2699 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.4-n-047a586f92" Jul 12 00:08:42.060201 kubelet[2699]: E0712 00:08:42.060198 2699 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081.3.4-n-047a586f92\": node \"ci-4081.3.4-n-047a586f92\" not found" Jul 12 00:08:42.139372 kubelet[2699]: I0712 00:08:42.139106 2699 apiserver.go:52] "Watching apiserver" Jul 12 00:08:42.146438 kubelet[2699]: I0712 00:08:42.146288 2699 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.4-n-047a586f92" Jul 12 00:08:42.152024 kubelet[2699]: I0712 00:08:42.151829 2699 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.4-n-047a586f92" Jul 12 00:08:42.152178 kubelet[2699]: I0712 00:08:42.152162 2699 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 12 00:08:42.162519 kubelet[2699]: E0712 00:08:42.162445 2699 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.4-n-047a586f92\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.4-n-047a586f92" Jul 12 00:08:42.163292 kubelet[2699]: E0712 00:08:42.162733 2699 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.4-n-047a586f92\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.4-n-047a586f92" Jul 12 
00:08:42.163292 kubelet[2699]: I0712 00:08:42.162754 2699 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.4-n-047a586f92" Jul 12 00:08:42.166919 kubelet[2699]: E0712 00:08:42.166771 2699 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.4-n-047a586f92\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.4-n-047a586f92" Jul 12 00:08:42.166919 kubelet[2699]: I0712 00:08:42.166791 2699 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.4-n-047a586f92" Jul 12 00:08:42.168514 kubelet[2699]: E0712 00:08:42.168490 2699 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.4-n-047a586f92\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.4-n-047a586f92" Jul 12 00:08:42.212596 kubelet[2699]: I0712 00:08:42.212489 2699 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.4-n-047a586f92" Jul 12 00:08:42.223361 kubelet[2699]: E0712 00:08:42.223167 2699 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.4-n-047a586f92\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.4-n-047a586f92" Jul 12 00:08:44.341657 systemd[1]: Reloading requested from client PID 3023 ('systemctl') (unit session-9.scope)... Jul 12 00:08:44.341673 systemd[1]: Reloading... Jul 12 00:08:44.448355 zram_generator::config[3066]: No configuration found. Jul 12 00:08:44.536085 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:08:44.648120 systemd[1]: Reloading finished in 306 ms. 
Jul 12 00:08:44.685369 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:08:44.697717 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 00:08:44.698110 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:08:44.698233 systemd[1]: kubelet.service: Consumed 1.083s CPU time, 128.4M memory peak, 0B memory swap peak. Jul 12 00:08:44.706708 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:08:44.812588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:08:44.824523 (kubelet)[3127]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:08:44.861424 kubelet[3127]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:08:44.861424 kubelet[3127]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 12 00:08:44.861424 kubelet[3127]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 12 00:08:44.861424 kubelet[3127]: I0712 00:08:44.861413 3127 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:08:44.866602 kubelet[3127]: I0712 00:08:44.866568 3127 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 12 00:08:44.866682 kubelet[3127]: I0712 00:08:44.866608 3127 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:08:44.866864 kubelet[3127]: I0712 00:08:44.866842 3127 server.go:956] "Client rotation is on, will bootstrap in background" Jul 12 00:08:44.867990 kubelet[3127]: I0712 00:08:44.867970 3127 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 12 00:08:44.870195 kubelet[3127]: I0712 00:08:44.869990 3127 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:08:44.874580 kubelet[3127]: E0712 00:08:44.874553 3127 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:08:44.874931 kubelet[3127]: I0712 00:08:44.874744 3127 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:08:44.877708 kubelet[3127]: I0712 00:08:44.877689 3127 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 00:08:44.878219 kubelet[3127]: I0712 00:08:44.877985 3127 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:08:44.878219 kubelet[3127]: I0712 00:08:44.878010 3127 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.4-n-047a586f92","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 12 00:08:44.878219 kubelet[3127]: I0712 00:08:44.878148 3127 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 
00:08:44.878219 kubelet[3127]: I0712 00:08:44.878155 3127 container_manager_linux.go:303] "Creating device plugin manager" Jul 12 00:08:44.878219 kubelet[3127]: I0712 00:08:44.878194 3127 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:08:44.878688 kubelet[3127]: I0712 00:08:44.878593 3127 kubelet.go:480] "Attempting to sync node with API server" Jul 12 00:08:44.878688 kubelet[3127]: I0712 00:08:44.878609 3127 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:08:44.878688 kubelet[3127]: I0712 00:08:44.878636 3127 kubelet.go:386] "Adding apiserver pod source" Jul 12 00:08:44.878688 kubelet[3127]: I0712 00:08:44.878648 3127 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:08:44.880596 kubelet[3127]: I0712 00:08:44.880554 3127 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 12 00:08:44.881145 kubelet[3127]: I0712 00:08:44.881111 3127 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 12 00:08:44.884946 kubelet[3127]: I0712 00:08:44.883090 3127 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 12 00:08:44.884946 kubelet[3127]: I0712 00:08:44.883130 3127 server.go:1289] "Started kubelet" Jul 12 00:08:44.884946 kubelet[3127]: I0712 00:08:44.884641 3127 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:08:44.891693 kubelet[3127]: I0712 00:08:44.890029 3127 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:08:44.893870 kubelet[3127]: I0712 00:08:44.892506 3127 server.go:317] "Adding debug handlers to kubelet server" Jul 12 00:08:44.897289 kubelet[3127]: I0712 00:08:44.897270 3127 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 12 00:08:44.897567 kubelet[3127]: E0712 00:08:44.897548 3127 kubelet_node_status.go:466] "Error getting the 
current node from lister" err="node \"ci-4081.3.4-n-047a586f92\" not found" Jul 12 00:08:44.897894 kubelet[3127]: I0712 00:08:44.897878 3127 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 12 00:08:44.898138 kubelet[3127]: I0712 00:08:44.898127 3127 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:08:44.899651 kubelet[3127]: I0712 00:08:44.899555 3127 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:08:44.899768 kubelet[3127]: I0712 00:08:44.899745 3127 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:08:44.899925 kubelet[3127]: I0712 00:08:44.899906 3127 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:08:44.916132 kubelet[3127]: I0712 00:08:44.916107 3127 factory.go:223] Registration of the systemd container factory successfully Jul 12 00:08:44.916390 kubelet[3127]: I0712 00:08:44.916369 3127 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:08:44.928417 kubelet[3127]: I0712 00:08:44.927958 3127 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 12 00:08:44.928932 kubelet[3127]: I0712 00:08:44.928904 3127 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 12 00:08:44.928932 kubelet[3127]: I0712 00:08:44.928931 3127 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 12 00:08:44.929002 kubelet[3127]: I0712 00:08:44.928953 3127 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 12 00:08:44.929002 kubelet[3127]: I0712 00:08:44.928960 3127 kubelet.go:2436] "Starting kubelet main sync loop" Jul 12 00:08:44.929054 kubelet[3127]: E0712 00:08:44.928997 3127 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:08:44.935222 kubelet[3127]: I0712 00:08:44.935190 3127 factory.go:223] Registration of the containerd container factory successfully Jul 12 00:08:44.983778 kubelet[3127]: I0712 00:08:44.983746 3127 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 12 00:08:44.983778 kubelet[3127]: I0712 00:08:44.983766 3127 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 12 00:08:44.983778 kubelet[3127]: I0712 00:08:44.983787 3127 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:08:44.983968 kubelet[3127]: I0712 00:08:44.983904 3127 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 12 00:08:44.983968 kubelet[3127]: I0712 00:08:44.983914 3127 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 12 00:08:44.983968 kubelet[3127]: I0712 00:08:44.983929 3127 policy_none.go:49] "None policy: Start" Jul 12 00:08:44.983968 kubelet[3127]: I0712 00:08:44.983937 3127 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 12 00:08:44.983968 kubelet[3127]: I0712 00:08:44.983945 3127 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:08:44.984074 kubelet[3127]: I0712 00:08:44.984023 3127 state_mem.go:75] "Updated machine memory state" Jul 12 00:08:44.988511 kubelet[3127]: E0712 00:08:44.987750 3127 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 12 00:08:44.988511 kubelet[3127]: I0712 00:08:44.987919 3127 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:08:44.988511 kubelet[3127]: I0712 00:08:44.987929 3127 container_log_manager.go:189] "Initializing 
container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:08:44.988511 kubelet[3127]: I0712 00:08:44.988465 3127 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:08:44.990679 kubelet[3127]: E0712 00:08:44.989940 3127 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 12 00:08:45.030235 kubelet[3127]: I0712 00:08:45.029918 3127 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.4-n-047a586f92" Jul 12 00:08:45.030235 kubelet[3127]: I0712 00:08:45.029971 3127 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.4-n-047a586f92" Jul 12 00:08:45.030235 kubelet[3127]: I0712 00:08:45.030199 3127 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.4-n-047a586f92" Jul 12 00:08:45.040653 kubelet[3127]: I0712 00:08:45.040634 3127 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 12 00:08:45.048799 kubelet[3127]: I0712 00:08:45.048780 3127 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 12 00:08:45.049053 kubelet[3127]: I0712 00:08:45.048988 3127 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 12 00:08:45.090581 kubelet[3127]: I0712 00:08:45.090413 3127 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-n-047a586f92" Jul 12 00:08:45.099843 kubelet[3127]: I0712 00:08:45.099789 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/d568fbc8f55e5489f7f78b6260272c52-ca-certs\") pod \"kube-controller-manager-ci-4081.3.4-n-047a586f92\" (UID: \"d568fbc8f55e5489f7f78b6260272c52\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-n-047a586f92" Jul 12 00:08:45.099843 kubelet[3127]: I0712 00:08:45.099828 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d568fbc8f55e5489f7f78b6260272c52-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.4-n-047a586f92\" (UID: \"d568fbc8f55e5489f7f78b6260272c52\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-n-047a586f92" Jul 12 00:08:45.100199 kubelet[3127]: I0712 00:08:45.099849 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d568fbc8f55e5489f7f78b6260272c52-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.4-n-047a586f92\" (UID: \"d568fbc8f55e5489f7f78b6260272c52\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-n-047a586f92" Jul 12 00:08:45.100199 kubelet[3127]: I0712 00:08:45.099867 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d568fbc8f55e5489f7f78b6260272c52-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.4-n-047a586f92\" (UID: \"d568fbc8f55e5489f7f78b6260272c52\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-n-047a586f92" Jul 12 00:08:45.100199 kubelet[3127]: I0712 00:08:45.099899 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d568fbc8f55e5489f7f78b6260272c52-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.4-n-047a586f92\" (UID: \"d568fbc8f55e5489f7f78b6260272c52\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.4-n-047a586f92" Jul 12 00:08:45.100199 kubelet[3127]: I0712 00:08:45.099916 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d547b99baa5365d07bba6231b22f5f93-kubeconfig\") pod \"kube-scheduler-ci-4081.3.4-n-047a586f92\" (UID: \"d547b99baa5365d07bba6231b22f5f93\") " pod="kube-system/kube-scheduler-ci-4081.3.4-n-047a586f92" Jul 12 00:08:45.100199 kubelet[3127]: I0712 00:08:45.099939 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f28d7d03c493b8dbed4771cd2f6f307-ca-certs\") pod \"kube-apiserver-ci-4081.3.4-n-047a586f92\" (UID: \"4f28d7d03c493b8dbed4771cd2f6f307\") " pod="kube-system/kube-apiserver-ci-4081.3.4-n-047a586f92" Jul 12 00:08:45.100358 kubelet[3127]: I0712 00:08:45.099973 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f28d7d03c493b8dbed4771cd2f6f307-k8s-certs\") pod \"kube-apiserver-ci-4081.3.4-n-047a586f92\" (UID: \"4f28d7d03c493b8dbed4771cd2f6f307\") " pod="kube-system/kube-apiserver-ci-4081.3.4-n-047a586f92" Jul 12 00:08:45.100358 kubelet[3127]: I0712 00:08:45.100121 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f28d7d03c493b8dbed4771cd2f6f307-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.4-n-047a586f92\" (UID: \"4f28d7d03c493b8dbed4771cd2f6f307\") " pod="kube-system/kube-apiserver-ci-4081.3.4-n-047a586f92" Jul 12 00:08:45.103564 kubelet[3127]: I0712 00:08:45.103522 3127 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.4-n-047a586f92" Jul 12 00:08:45.103673 kubelet[3127]: I0712 00:08:45.103594 3127 kubelet_node_status.go:78] 
"Successfully registered node" node="ci-4081.3.4-n-047a586f92" Jul 12 00:08:45.879974 kubelet[3127]: I0712 00:08:45.879930 3127 apiserver.go:52] "Watching apiserver" Jul 12 00:08:45.898563 kubelet[3127]: I0712 00:08:45.898512 3127 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 12 00:08:45.925160 kubelet[3127]: I0712 00:08:45.925019 3127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.4-n-047a586f92" podStartSLOduration=0.92498226 podStartE2EDuration="924.98226ms" podCreationTimestamp="2025-07-12 00:08:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:08:45.914443507 +0000 UTC m=+1.085153643" watchObservedRunningTime="2025-07-12 00:08:45.92498226 +0000 UTC m=+1.095692396" Jul 12 00:08:45.937799 kubelet[3127]: I0712 00:08:45.937738 3127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.4-n-047a586f92" podStartSLOduration=0.937722252 podStartE2EDuration="937.722252ms" podCreationTimestamp="2025-07-12 00:08:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:08:45.925699459 +0000 UTC m=+1.096409595" watchObservedRunningTime="2025-07-12 00:08:45.937722252 +0000 UTC m=+1.108432348" Jul 12 00:08:45.937946 kubelet[3127]: I0712 00:08:45.937926 3127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.4-n-047a586f92" podStartSLOduration=0.937919252 podStartE2EDuration="937.919252ms" podCreationTimestamp="2025-07-12 00:08:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:08:45.937118692 +0000 UTC m=+1.107828828" 
watchObservedRunningTime="2025-07-12 00:08:45.937919252 +0000 UTC m=+1.108629388" Jul 12 00:08:45.966217 kubelet[3127]: I0712 00:08:45.966179 3127 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.4-n-047a586f92" Jul 12 00:08:45.966809 kubelet[3127]: I0712 00:08:45.966778 3127 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.4-n-047a586f92" Jul 12 00:08:45.980842 kubelet[3127]: I0712 00:08:45.980805 3127 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 12 00:08:45.980986 kubelet[3127]: E0712 00:08:45.980871 3127 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.4-n-047a586f92\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.4-n-047a586f92" Jul 12 00:08:45.982134 kubelet[3127]: I0712 00:08:45.982106 3127 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 12 00:08:45.982208 kubelet[3127]: E0712 00:08:45.982146 3127 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.4-n-047a586f92\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.4-n-047a586f92" Jul 12 00:08:50.782189 kubelet[3127]: I0712 00:08:50.782146 3127 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 12 00:08:50.782783 containerd[1748]: time="2025-07-12T00:08:50.782441809Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 12 00:08:50.784913 kubelet[3127]: I0712 00:08:50.784428 3127 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 12 00:08:51.797867 systemd[1]: Created slice kubepods-besteffort-pod33fe31aa_5580_4b1d_a7a0_1f6ab909a15b.slice - libcontainer container kubepods-besteffort-pod33fe31aa_5580_4b1d_a7a0_1f6ab909a15b.slice. Jul 12 00:08:51.841992 kubelet[3127]: I0712 00:08:51.841948 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33fe31aa-5580-4b1d-a7a0-1f6ab909a15b-xtables-lock\") pod \"kube-proxy-qkh8p\" (UID: \"33fe31aa-5580-4b1d-a7a0-1f6ab909a15b\") " pod="kube-system/kube-proxy-qkh8p" Jul 12 00:08:51.841992 kubelet[3127]: I0712 00:08:51.841988 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/33fe31aa-5580-4b1d-a7a0-1f6ab909a15b-kube-proxy\") pod \"kube-proxy-qkh8p\" (UID: \"33fe31aa-5580-4b1d-a7a0-1f6ab909a15b\") " pod="kube-system/kube-proxy-qkh8p" Jul 12 00:08:51.842361 kubelet[3127]: I0712 00:08:51.842006 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33fe31aa-5580-4b1d-a7a0-1f6ab909a15b-lib-modules\") pod \"kube-proxy-qkh8p\" (UID: \"33fe31aa-5580-4b1d-a7a0-1f6ab909a15b\") " pod="kube-system/kube-proxy-qkh8p" Jul 12 00:08:51.842361 kubelet[3127]: I0712 00:08:51.842032 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks667\" (UniqueName: \"kubernetes.io/projected/33fe31aa-5580-4b1d-a7a0-1f6ab909a15b-kube-api-access-ks667\") pod \"kube-proxy-qkh8p\" (UID: \"33fe31aa-5580-4b1d-a7a0-1f6ab909a15b\") " pod="kube-system/kube-proxy-qkh8p" Jul 12 00:08:52.030896 systemd[1]: Created slice 
kubepods-besteffort-pod08d9e574_0c8e_4ea6_ac6f_4a5991a2b170.slice - libcontainer container kubepods-besteffort-pod08d9e574_0c8e_4ea6_ac6f_4a5991a2b170.slice. Jul 12 00:08:52.043515 kubelet[3127]: I0712 00:08:52.043053 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/08d9e574-0c8e-4ea6-ac6f-4a5991a2b170-var-lib-calico\") pod \"tigera-operator-747864d56d-mgcr5\" (UID: \"08d9e574-0c8e-4ea6-ac6f-4a5991a2b170\") " pod="tigera-operator/tigera-operator-747864d56d-mgcr5" Jul 12 00:08:52.043515 kubelet[3127]: I0712 00:08:52.043105 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whncb\" (UniqueName: \"kubernetes.io/projected/08d9e574-0c8e-4ea6-ac6f-4a5991a2b170-kube-api-access-whncb\") pod \"tigera-operator-747864d56d-mgcr5\" (UID: \"08d9e574-0c8e-4ea6-ac6f-4a5991a2b170\") " pod="tigera-operator/tigera-operator-747864d56d-mgcr5" Jul 12 00:08:52.109699 containerd[1748]: time="2025-07-12T00:08:52.109505324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qkh8p,Uid:33fe31aa-5580-4b1d-a7a0-1f6ab909a15b,Namespace:kube-system,Attempt:0,}" Jul 12 00:08:52.163034 containerd[1748]: time="2025-07-12T00:08:52.162125144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:52.163034 containerd[1748]: time="2025-07-12T00:08:52.162605983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:52.163034 containerd[1748]: time="2025-07-12T00:08:52.162618383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:52.163408 containerd[1748]: time="2025-07-12T00:08:52.162938863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:52.185409 systemd[1]: Started cri-containerd-5597b71507e2453fe189efa70787f579e101cb16324ecca35c2cca2cebe2885a.scope - libcontainer container 5597b71507e2453fe189efa70787f579e101cb16324ecca35c2cca2cebe2885a. Jul 12 00:08:52.203083 containerd[1748]: time="2025-07-12T00:08:52.203041937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qkh8p,Uid:33fe31aa-5580-4b1d-a7a0-1f6ab909a15b,Namespace:kube-system,Attempt:0,} returns sandbox id \"5597b71507e2453fe189efa70787f579e101cb16324ecca35c2cca2cebe2885a\"" Jul 12 00:08:52.212969 containerd[1748]: time="2025-07-12T00:08:52.212936806Z" level=info msg="CreateContainer within sandbox \"5597b71507e2453fe189efa70787f579e101cb16324ecca35c2cca2cebe2885a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 12 00:08:52.275908 containerd[1748]: time="2025-07-12T00:08:52.275864934Z" level=info msg="CreateContainer within sandbox \"5597b71507e2453fe189efa70787f579e101cb16324ecca35c2cca2cebe2885a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"99eae4e42de6d07f5a664a1009f09ac165e993fd4ab16096d4db25684ebadf0a\"" Jul 12 00:08:52.277708 containerd[1748]: time="2025-07-12T00:08:52.276591373Z" level=info msg="StartContainer for \"99eae4e42de6d07f5a664a1009f09ac165e993fd4ab16096d4db25684ebadf0a\"" Jul 12 00:08:52.298564 systemd[1]: Started cri-containerd-99eae4e42de6d07f5a664a1009f09ac165e993fd4ab16096d4db25684ebadf0a.scope - libcontainer container 99eae4e42de6d07f5a664a1009f09ac165e993fd4ab16096d4db25684ebadf0a. 
Jul 12 00:08:52.334311 containerd[1748]: time="2025-07-12T00:08:52.334250827Z" level=info msg="StartContainer for \"99eae4e42de6d07f5a664a1009f09ac165e993fd4ab16096d4db25684ebadf0a\" returns successfully" Jul 12 00:08:52.336783 containerd[1748]: time="2025-07-12T00:08:52.336739064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-mgcr5,Uid:08d9e574-0c8e-4ea6-ac6f-4a5991a2b170,Namespace:tigera-operator,Attempt:0,}" Jul 12 00:08:52.402468 containerd[1748]: time="2025-07-12T00:08:52.402248349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:52.403541 containerd[1748]: time="2025-07-12T00:08:52.402337509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:52.403541 containerd[1748]: time="2025-07-12T00:08:52.403148188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:52.403541 containerd[1748]: time="2025-07-12T00:08:52.403310588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:52.420556 systemd[1]: Started cri-containerd-10344256964d601047160b812b7df49a93afa38d365a7de902f530aa6f12fb85.scope - libcontainer container 10344256964d601047160b812b7df49a93afa38d365a7de902f530aa6f12fb85. 
Jul 12 00:08:52.466361 containerd[1748]: time="2025-07-12T00:08:52.466284196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-mgcr5,Uid:08d9e574-0c8e-4ea6-ac6f-4a5991a2b170,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"10344256964d601047160b812b7df49a93afa38d365a7de902f530aa6f12fb85\"" Jul 12 00:08:52.470365 containerd[1748]: time="2025-07-12T00:08:52.470318591Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 12 00:08:53.911470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1204974135.mount: Deactivated successfully. Jul 12 00:08:54.918283 containerd[1748]: time="2025-07-12T00:08:54.916043711Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:54.922662 containerd[1748]: time="2025-07-12T00:08:54.922620304Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 12 00:08:54.926059 containerd[1748]: time="2025-07-12T00:08:54.926024260Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:54.932162 containerd[1748]: time="2025-07-12T00:08:54.932116093Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:54.933279 containerd[1748]: time="2025-07-12T00:08:54.932622932Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 2.462272381s" Jul 12 00:08:54.933279 
containerd[1748]: time="2025-07-12T00:08:54.932652692Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 12 00:08:54.941541 containerd[1748]: time="2025-07-12T00:08:54.941505242Z" level=info msg="CreateContainer within sandbox \"10344256964d601047160b812b7df49a93afa38d365a7de902f530aa6f12fb85\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 12 00:08:54.944926 kubelet[3127]: I0712 00:08:54.944375 3127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qkh8p" podStartSLOduration=3.944357439 podStartE2EDuration="3.944357439s" podCreationTimestamp="2025-07-12 00:08:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:08:52.994129791 +0000 UTC m=+8.164839887" watchObservedRunningTime="2025-07-12 00:08:54.944357439 +0000 UTC m=+10.115067575" Jul 12 00:08:54.991483 containerd[1748]: time="2025-07-12T00:08:54.991365945Z" level=info msg="CreateContainer within sandbox \"10344256964d601047160b812b7df49a93afa38d365a7de902f530aa6f12fb85\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f0cadce709fff1fd0730f8d9348f3f942aa72c32ad1c816b3205d2f03b8a3665\"" Jul 12 00:08:54.992543 containerd[1748]: time="2025-07-12T00:08:54.991727945Z" level=info msg="StartContainer for \"f0cadce709fff1fd0730f8d9348f3f942aa72c32ad1c816b3205d2f03b8a3665\"" Jul 12 00:08:55.022497 systemd[1]: Started cri-containerd-f0cadce709fff1fd0730f8d9348f3f942aa72c32ad1c816b3205d2f03b8a3665.scope - libcontainer container f0cadce709fff1fd0730f8d9348f3f942aa72c32ad1c816b3205d2f03b8a3665. 
Jul 12 00:08:55.049558 containerd[1748]: time="2025-07-12T00:08:55.049519598Z" level=info msg="StartContainer for \"f0cadce709fff1fd0730f8d9348f3f942aa72c32ad1c816b3205d2f03b8a3665\" returns successfully" Jul 12 00:08:57.885519 systemd[1]: cri-containerd-f0cadce709fff1fd0730f8d9348f3f942aa72c32ad1c816b3205d2f03b8a3665.scope: Deactivated successfully. Jul 12 00:08:57.914327 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0cadce709fff1fd0730f8d9348f3f942aa72c32ad1c816b3205d2f03b8a3665-rootfs.mount: Deactivated successfully. Jul 12 00:08:58.487279 containerd[1748]: time="2025-07-12T00:08:58.487124583Z" level=info msg="shim disconnected" id=f0cadce709fff1fd0730f8d9348f3f942aa72c32ad1c816b3205d2f03b8a3665 namespace=k8s.io Jul 12 00:08:58.487279 containerd[1748]: time="2025-07-12T00:08:58.487194543Z" level=warning msg="cleaning up after shim disconnected" id=f0cadce709fff1fd0730f8d9348f3f942aa72c32ad1c816b3205d2f03b8a3665 namespace=k8s.io Jul 12 00:08:58.487279 containerd[1748]: time="2025-07-12T00:08:58.487204143Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:08:58.994937 kubelet[3127]: I0712 00:08:58.994829 3127 scope.go:117] "RemoveContainer" containerID="f0cadce709fff1fd0730f8d9348f3f942aa72c32ad1c816b3205d2f03b8a3665" Jul 12 00:08:58.998892 containerd[1748]: time="2025-07-12T00:08:58.998783558Z" level=info msg="CreateContainer within sandbox \"10344256964d601047160b812b7df49a93afa38d365a7de902f530aa6f12fb85\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jul 12 00:08:59.041203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2403006507.mount: Deactivated successfully. 
Jul 12 00:08:59.055038 containerd[1748]: time="2025-07-12T00:08:59.054994093Z" level=info msg="CreateContainer within sandbox \"10344256964d601047160b812b7df49a93afa38d365a7de902f530aa6f12fb85\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"cf432081683c76f004b524984a824ed720cadc2df74052fe4cae3294f3a59f74\"" Jul 12 00:08:59.055508 containerd[1748]: time="2025-07-12T00:08:59.055482733Z" level=info msg="StartContainer for \"cf432081683c76f004b524984a824ed720cadc2df74052fe4cae3294f3a59f74\"" Jul 12 00:08:59.085507 systemd[1]: Started cri-containerd-cf432081683c76f004b524984a824ed720cadc2df74052fe4cae3294f3a59f74.scope - libcontainer container cf432081683c76f004b524984a824ed720cadc2df74052fe4cae3294f3a59f74. Jul 12 00:08:59.110170 containerd[1748]: time="2025-07-12T00:08:59.110124190Z" level=info msg="StartContainer for \"cf432081683c76f004b524984a824ed720cadc2df74052fe4cae3294f3a59f74\" returns successfully" Jul 12 00:09:00.010536 kubelet[3127]: I0712 00:09:00.010378 3127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-mgcr5" podStartSLOduration=6.544689978 podStartE2EDuration="9.010364876s" podCreationTimestamp="2025-07-12 00:08:51 +0000 UTC" firstStartedPulling="2025-07-12 00:08:52.469312792 +0000 UTC m=+7.640022888" lastFinishedPulling="2025-07-12 00:08:54.93498765 +0000 UTC m=+10.105697786" observedRunningTime="2025-07-12 00:08:56.00016323 +0000 UTC m=+11.170873366" watchObservedRunningTime="2025-07-12 00:09:00.010364876 +0000 UTC m=+15.181075012" Jul 12 00:09:00.987966 sudo[2204]: pam_unix(sudo:session): session closed for user root Jul 12 00:09:01.063484 sshd[2201]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:01.067390 systemd[1]: sshd@6-10.200.20.17:22-10.200.16.10:41010.service: Deactivated successfully. Jul 12 00:09:01.069624 systemd[1]: session-9.scope: Deactivated successfully. 
Jul 12 00:09:01.070480 systemd[1]: session-9.scope: Consumed 6.467s CPU time, 152.9M memory peak, 0B memory swap peak. Jul 12 00:09:01.071301 systemd-logind[1686]: Session 9 logged out. Waiting for processes to exit. Jul 12 00:09:01.072246 systemd-logind[1686]: Removed session 9. Jul 12 00:09:10.524383 systemd[1]: Created slice kubepods-besteffort-pod5b04e65a_5f15_4280_9f6e_c58dea84f606.slice - libcontainer container kubepods-besteffort-pod5b04e65a_5f15_4280_9f6e_c58dea84f606.slice. Jul 12 00:09:10.557341 kubelet[3127]: I0712 00:09:10.557184 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5b04e65a-5f15-4280-9f6e-c58dea84f606-typha-certs\") pod \"calico-typha-7559d57d45-cfb4k\" (UID: \"5b04e65a-5f15-4280-9f6e-c58dea84f606\") " pod="calico-system/calico-typha-7559d57d45-cfb4k" Jul 12 00:09:10.557341 kubelet[3127]: I0712 00:09:10.557228 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b04e65a-5f15-4280-9f6e-c58dea84f606-tigera-ca-bundle\") pod \"calico-typha-7559d57d45-cfb4k\" (UID: \"5b04e65a-5f15-4280-9f6e-c58dea84f606\") " pod="calico-system/calico-typha-7559d57d45-cfb4k" Jul 12 00:09:10.557341 kubelet[3127]: I0712 00:09:10.557251 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmmln\" (UniqueName: \"kubernetes.io/projected/5b04e65a-5f15-4280-9f6e-c58dea84f606-kube-api-access-jmmln\") pod \"calico-typha-7559d57d45-cfb4k\" (UID: \"5b04e65a-5f15-4280-9f6e-c58dea84f606\") " pod="calico-system/calico-typha-7559d57d45-cfb4k" Jul 12 00:09:10.739424 systemd[1]: Created slice kubepods-besteffort-podb457ab52_7047_4e86_b2df_a5b416c8f41c.slice - libcontainer container kubepods-besteffort-podb457ab52_7047_4e86_b2df_a5b416c8f41c.slice. 
Jul 12 00:09:10.758928 kubelet[3127]: I0712 00:09:10.758632 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b457ab52-7047-4e86-b2df-a5b416c8f41c-cni-net-dir\") pod \"calico-node-vrbfc\" (UID: \"b457ab52-7047-4e86-b2df-a5b416c8f41c\") " pod="calico-system/calico-node-vrbfc" Jul 12 00:09:10.758928 kubelet[3127]: I0712 00:09:10.758667 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b457ab52-7047-4e86-b2df-a5b416c8f41c-var-lib-calico\") pod \"calico-node-vrbfc\" (UID: \"b457ab52-7047-4e86-b2df-a5b416c8f41c\") " pod="calico-system/calico-node-vrbfc" Jul 12 00:09:10.758928 kubelet[3127]: I0712 00:09:10.758686 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b457ab52-7047-4e86-b2df-a5b416c8f41c-var-run-calico\") pod \"calico-node-vrbfc\" (UID: \"b457ab52-7047-4e86-b2df-a5b416c8f41c\") " pod="calico-system/calico-node-vrbfc" Jul 12 00:09:10.758928 kubelet[3127]: I0712 00:09:10.758703 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b457ab52-7047-4e86-b2df-a5b416c8f41c-flexvol-driver-host\") pod \"calico-node-vrbfc\" (UID: \"b457ab52-7047-4e86-b2df-a5b416c8f41c\") " pod="calico-system/calico-node-vrbfc" Jul 12 00:09:10.758928 kubelet[3127]: I0712 00:09:10.758719 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b457ab52-7047-4e86-b2df-a5b416c8f41c-xtables-lock\") pod \"calico-node-vrbfc\" (UID: \"b457ab52-7047-4e86-b2df-a5b416c8f41c\") " pod="calico-system/calico-node-vrbfc" Jul 12 00:09:10.759168 kubelet[3127]: I0712 00:09:10.758735 3127 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b457ab52-7047-4e86-b2df-a5b416c8f41c-cni-bin-dir\") pod \"calico-node-vrbfc\" (UID: \"b457ab52-7047-4e86-b2df-a5b416c8f41c\") " pod="calico-system/calico-node-vrbfc" Jul 12 00:09:10.759168 kubelet[3127]: I0712 00:09:10.758751 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b457ab52-7047-4e86-b2df-a5b416c8f41c-lib-modules\") pod \"calico-node-vrbfc\" (UID: \"b457ab52-7047-4e86-b2df-a5b416c8f41c\") " pod="calico-system/calico-node-vrbfc" Jul 12 00:09:10.759168 kubelet[3127]: I0712 00:09:10.758766 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b457ab52-7047-4e86-b2df-a5b416c8f41c-node-certs\") pod \"calico-node-vrbfc\" (UID: \"b457ab52-7047-4e86-b2df-a5b416c8f41c\") " pod="calico-system/calico-node-vrbfc" Jul 12 00:09:10.759168 kubelet[3127]: I0712 00:09:10.758781 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b457ab52-7047-4e86-b2df-a5b416c8f41c-tigera-ca-bundle\") pod \"calico-node-vrbfc\" (UID: \"b457ab52-7047-4e86-b2df-a5b416c8f41c\") " pod="calico-system/calico-node-vrbfc" Jul 12 00:09:10.759168 kubelet[3127]: I0712 00:09:10.758799 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b457ab52-7047-4e86-b2df-a5b416c8f41c-cni-log-dir\") pod \"calico-node-vrbfc\" (UID: \"b457ab52-7047-4e86-b2df-a5b416c8f41c\") " pod="calico-system/calico-node-vrbfc" Jul 12 00:09:10.759341 kubelet[3127]: I0712 00:09:10.758812 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b457ab52-7047-4e86-b2df-a5b416c8f41c-policysync\") pod \"calico-node-vrbfc\" (UID: \"b457ab52-7047-4e86-b2df-a5b416c8f41c\") " pod="calico-system/calico-node-vrbfc" Jul 12 00:09:10.759341 kubelet[3127]: I0712 00:09:10.758828 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpsgs\" (UniqueName: \"kubernetes.io/projected/b457ab52-7047-4e86-b2df-a5b416c8f41c-kube-api-access-zpsgs\") pod \"calico-node-vrbfc\" (UID: \"b457ab52-7047-4e86-b2df-a5b416c8f41c\") " pod="calico-system/calico-node-vrbfc" Jul 12 00:09:10.836538 containerd[1748]: time="2025-07-12T00:09:10.836005135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7559d57d45-cfb4k,Uid:5b04e65a-5f15-4280-9f6e-c58dea84f606,Namespace:calico-system,Attempt:0,}" Jul 12 00:09:10.862059 kubelet[3127]: E0712 00:09:10.861178 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:10.862059 kubelet[3127]: W0712 00:09:10.861203 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:10.862059 kubelet[3127]: E0712 00:09:10.861223 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:09:10.862059 kubelet[3127]: E0712 00:09:10.861881 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:10.862059 kubelet[3127]: W0712 00:09:10.861893 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:10.862059 kubelet[3127]: E0712 00:09:10.861906 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:10.863895 kubelet[3127]: E0712 00:09:10.863349 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:10.863895 kubelet[3127]: W0712 00:09:10.863369 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:10.863895 kubelet[3127]: E0712 00:09:10.863383 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:09:10.863895 kubelet[3127]: E0712 00:09:10.863632 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:10.863895 kubelet[3127]: W0712 00:09:10.863640 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:10.863895 kubelet[3127]: E0712 00:09:10.863650 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:10.863895 kubelet[3127]: E0712 00:09:10.863894 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:10.863895 kubelet[3127]: W0712 00:09:10.863904 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:10.864807 kubelet[3127]: E0712 00:09:10.863915 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:09:10.867199 kubelet[3127]: E0712 00:09:10.865386 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:10.867199 kubelet[3127]: W0712 00:09:10.865402 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:10.867199 kubelet[3127]: E0712 00:09:10.865414 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:10.867914 kubelet[3127]: E0712 00:09:10.867883 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:10.867914 kubelet[3127]: W0712 00:09:10.867900 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:10.867914 kubelet[3127]: E0712 00:09:10.867913 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:09:10.868138 kubelet[3127]: E0712 00:09:10.868109 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:10.868138 kubelet[3127]: W0712 00:09:10.868124 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:10.868138 kubelet[3127]: E0712 00:09:10.868136 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:10.868575 kubelet[3127]: E0712 00:09:10.868556 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:10.868691 kubelet[3127]: W0712 00:09:10.868572 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:10.868691 kubelet[3127]: E0712 00:09:10.868684 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:09:10.869198 kubelet[3127]: E0712 00:09:10.869052 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:10.869198 kubelet[3127]: W0712 00:09:10.869062 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:10.869198 kubelet[3127]: E0712 00:09:10.869073 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:10.869615 kubelet[3127]: E0712 00:09:10.869446 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:10.869615 kubelet[3127]: W0712 00:09:10.869462 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:10.869615 kubelet[3127]: E0712 00:09:10.869473 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:09:10.871278 kubelet[3127]: E0712 00:09:10.869839 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:10.871278 kubelet[3127]: W0712 00:09:10.869853 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:10.871278 kubelet[3127]: E0712 00:09:10.869863 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:10.871278 kubelet[3127]: E0712 00:09:10.870113 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:10.871278 kubelet[3127]: W0712 00:09:10.870122 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:10.871278 kubelet[3127]: E0712 00:09:10.870131 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:09:10.871278 kubelet[3127]: E0712 00:09:10.870364 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:10.871278 kubelet[3127]: W0712 00:09:10.870373 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:10.871278 kubelet[3127]: E0712 00:09:10.870382 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:10.871278 kubelet[3127]: E0712 00:09:10.870712 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:10.871576 kubelet[3127]: W0712 00:09:10.870721 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:10.871576 kubelet[3127]: E0712 00:09:10.870732 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:09:10.871576 kubelet[3127]: E0712 00:09:10.870985 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:10.871576 kubelet[3127]: W0712 00:09:10.870994 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:10.871576 kubelet[3127]: E0712 00:09:10.871003 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:10.871576 kubelet[3127]: E0712 00:09:10.871417 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:10.871576 kubelet[3127]: W0712 00:09:10.871428 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:10.871576 kubelet[3127]: E0712 00:09:10.871438 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:09:10.871775 kubelet[3127]: E0712 00:09:10.871699 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:10.871775 kubelet[3127]: W0712 00:09:10.871709 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:10.871775 kubelet[3127]: E0712 00:09:10.871719 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:10.872684 kubelet[3127]: E0712 00:09:10.872132 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:10.872684 kubelet[3127]: W0712 00:09:10.872150 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:10.872684 kubelet[3127]: E0712 00:09:10.872162 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:09:10.872684 kubelet[3127]: E0712 00:09:10.872333 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:10.872684 kubelet[3127]: W0712 00:09:10.872341 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:10.872684 kubelet[3127]: E0712 00:09:10.872350 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:10.873200 kubelet[3127]: E0712 00:09:10.872929 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:10.873200 kubelet[3127]: W0712 00:09:10.872947 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:10.873200 kubelet[3127]: E0712 00:09:10.872958 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:09:10.874007 kubelet[3127]: E0712 00:09:10.873843 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:10.874007 kubelet[3127]: W0712 00:09:10.873869 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:10.874007 kubelet[3127]: E0712 00:09:10.873883 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:10.898311 kubelet[3127]: E0712 00:09:10.898197 3127 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-26mjl" podUID="0465de75-2781-421a-b1c8-807d08b402b9" Jul 12 00:09:10.912942 containerd[1748]: time="2025-07-12T00:09:10.912657282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:09:10.912942 containerd[1748]: time="2025-07-12T00:09:10.912740122Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:09:10.912942 containerd[1748]: time="2025-07-12T00:09:10.912759962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:10.912942 containerd[1748]: time="2025-07-12T00:09:10.912861442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:10.935482 kubelet[3127]: E0712 00:09:10.934830 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:10.935482 kubelet[3127]: W0712 00:09:10.934853 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:10.935482 kubelet[3127]: E0712 00:09:10.934873 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:10.941491 kubelet[3127]: E0712 00:09:10.941470 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:10.941625 kubelet[3127]: W0712 00:09:10.941611 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:10.941709 kubelet[3127]: E0712 00:09:10.941697 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:09:10.942160 kubelet[3127]: E0712 00:09:10.942143 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:10.943725 kubelet[3127]: W0712 00:09:10.942435 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:10.943725 kubelet[3127]: E0712 00:09:10.942508 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:10.944346 kubelet[3127]: E0712 00:09:10.944195 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:10.944346 kubelet[3127]: W0712 00:09:10.944209 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:10.944346 kubelet[3127]: E0712 00:09:10.944222 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:09:10.944658 kubelet[3127]: E0712 00:09:10.944483 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:10.945124 kubelet[3127]: W0712 00:09:10.944706 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:10.945124 kubelet[3127]: E0712 00:09:10.944727 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:10.945487 systemd[1]: Started cri-containerd-1d839197e5e9d170a7cd6ec88106adddf7fa8a17cb70ce724a5d645e988afaec.scope - libcontainer container 1d839197e5e9d170a7cd6ec88106adddf7fa8a17cb70ce724a5d645e988afaec. Jul 12 00:09:10.946022 kubelet[3127]: E0712 00:09:10.945791 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:10.946022 kubelet[3127]: W0712 00:09:10.945803 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:10.946022 kubelet[3127]: E0712 00:09:10.945815 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:09:10.946903 kubelet[3127]: E0712 00:09:10.946614 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:10.946903 kubelet[3127]: W0712 00:09:10.946628 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:10.946903 kubelet[3127]: E0712 00:09:10.946640 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:10.947417 kubelet[3127]: E0712 00:09:10.947401 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:10.947500 kubelet[3127]: W0712 00:09:10.947488 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:10.947561 kubelet[3127]: E0712 00:09:10.947542 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:09:10.948396 kubelet[3127]: E0712 00:09:10.948350 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:10.948396 kubelet[3127]: W0712 00:09:10.948365 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:10.948908 kubelet[3127]: E0712 00:09:10.948377 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:10.950162 kubelet[3127]: E0712 00:09:10.949801 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:10.950162 kubelet[3127]: W0712 00:09:10.949814 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:10.950162 kubelet[3127]: E0712 00:09:10.949926 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:09:10.951188 kubelet[3127]: E0712 00:09:10.950997 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:10.951188 kubelet[3127]: W0712 00:09:10.951011 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:10.951188 kubelet[3127]: E0712 00:09:10.951023 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:10.962506 kubelet[3127]: I0712 00:09:10.962384 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nfhm\" (UniqueName: \"kubernetes.io/projected/0465de75-2781-421a-b1c8-807d08b402b9-kube-api-access-6nfhm\") pod \"csi-node-driver-26mjl\" (UID: \"0465de75-2781-421a-b1c8-807d08b402b9\") " pod="calico-system/csi-node-driver-26mjl"
Jul 12 00:09:10.963394 kubelet[3127]: I0712 00:09:10.963364 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0465de75-2781-421a-b1c8-807d08b402b9-kubelet-dir\") pod \"csi-node-driver-26mjl\" (UID: \"0465de75-2781-421a-b1c8-807d08b402b9\") " pod="calico-system/csi-node-driver-26mjl"
Jul 12 00:09:10.963606 kubelet[3127]: I0712 00:09:10.963581 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0465de75-2781-421a-b1c8-807d08b402b9-varrun\") pod \"csi-node-driver-26mjl\" (UID: \"0465de75-2781-421a-b1c8-807d08b402b9\") " pod="calico-system/csi-node-driver-26mjl"
Jul 12 00:09:10.963908 kubelet[3127]: I0712 00:09:10.963763 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0465de75-2781-421a-b1c8-807d08b402b9-registration-dir\") pod \"csi-node-driver-26mjl\" (UID: \"0465de75-2781-421a-b1c8-807d08b402b9\") " pod="calico-system/csi-node-driver-26mjl"
Jul 12 00:09:10.966655 kubelet[3127]: I0712 00:09:10.966590 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0465de75-2781-421a-b1c8-807d08b402b9-socket-dir\") pod \"csi-node-driver-26mjl\" (UID: \"0465de75-2781-421a-b1c8-807d08b402b9\") " pod="calico-system/csi-node-driver-26mjl"
Jul 12 00:09:11.027704 containerd[1748]: time="2025-07-12T00:09:11.027602383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7559d57d45-cfb4k,Uid:5b04e65a-5f15-4280-9f6e-c58dea84f606,Namespace:calico-system,Attempt:0,} returns sandbox id \"1d839197e5e9d170a7cd6ec88106adddf7fa8a17cb70ce724a5d645e988afaec\""
Jul 12 00:09:11.030595 containerd[1748]: time="2025-07-12T00:09:11.030411460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 12 00:09:11.043096 containerd[1748]: time="2025-07-12T00:09:11.042774725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vrbfc,Uid:b457ab52-7047-4e86-b2df-a5b416c8f41c,Namespace:calico-system,Attempt:0,}"
Jul 12 00:09:11.117537 containerd[1748]: time="2025-07-12T00:09:11.117339315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:09:11.117537 containerd[1748]: time="2025-07-12T00:09:11.117401995Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:09:11.117537 containerd[1748]: time="2025-07-12T00:09:11.117417195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:09:11.118214 containerd[1748]: time="2025-07-12T00:09:11.117507795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:09:11.145211 systemd[1]: Started cri-containerd-cb63876d2c9c92ebdce12aad5ed9a1ef1960046e2c68ef13fb9b1952bac2813a.scope - libcontainer container cb63876d2c9c92ebdce12aad5ed9a1ef1960046e2c68ef13fb9b1952bac2813a.
Jul 12 00:09:11.187363 containerd[1748]: time="2025-07-12T00:09:11.187310751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vrbfc,Uid:b457ab52-7047-4e86-b2df-a5b416c8f41c,Namespace:calico-system,Attempt:0,} returns sandbox id \"cb63876d2c9c92ebdce12aad5ed9a1ef1960046e2c68ef13fb9b1952bac2813a\"" Jul 12 00:09:12.414851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3631751256.mount: Deactivated successfully. Jul 12 00:09:12.905182 containerd[1748]: time="2025-07-12T00:09:12.905126518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:12.910477 containerd[1748]: time="2025-07-12T00:09:12.910435072Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 12 00:09:12.914247 containerd[1748]: time="2025-07-12T00:09:12.914214507Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:12.919377 containerd[1748]: time="2025-07-12T00:09:12.919310461Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:12.920163 containerd[1748]: time="2025-07-12T00:09:12.920044860Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.88959904s" Jul 12 00:09:12.920163 containerd[1748]: time="2025-07-12T00:09:12.920076220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image 
reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 12 00:09:12.922323 containerd[1748]: time="2025-07-12T00:09:12.922220697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 12 00:09:12.935191 kubelet[3127]: E0712 00:09:12.935159 3127 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-26mjl" podUID="0465de75-2781-421a-b1c8-807d08b402b9" Jul 12 00:09:12.945040 containerd[1748]: time="2025-07-12T00:09:12.944990110Z" level=info msg="CreateContainer within sandbox \"1d839197e5e9d170a7cd6ec88106adddf7fa8a17cb70ce724a5d645e988afaec\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 12 00:09:13.013414 containerd[1748]: time="2025-07-12T00:09:13.013369187Z" level=info msg="CreateContainer within sandbox \"1d839197e5e9d170a7cd6ec88106adddf7fa8a17cb70ce724a5d645e988afaec\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"71c99aa5103f0a0b20b28b09b85d7e91bd7b24a5e0f310778d6667d76e5b17a5\"" Jul 12 00:09:13.014110 containerd[1748]: time="2025-07-12T00:09:13.013983507Z" level=info msg="StartContainer for \"71c99aa5103f0a0b20b28b09b85d7e91bd7b24a5e0f310778d6667d76e5b17a5\"" Jul 12 00:09:13.045415 systemd[1]: Started cri-containerd-71c99aa5103f0a0b20b28b09b85d7e91bd7b24a5e0f310778d6667d76e5b17a5.scope - libcontainer container 71c99aa5103f0a0b20b28b09b85d7e91bd7b24a5e0f310778d6667d76e5b17a5. 
Jul 12 00:09:13.080496 containerd[1748]: time="2025-07-12T00:09:13.080315107Z" level=info msg="StartContainer for \"71c99aa5103f0a0b20b28b09b85d7e91bd7b24a5e0f310778d6667d76e5b17a5\" returns successfully" Jul 12 00:09:14.062175 kubelet[3127]: I0712 00:09:14.061985 3127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7559d57d45-cfb4k" podStartSLOduration=2.171255043 podStartE2EDuration="4.061968122s" podCreationTimestamp="2025-07-12 00:09:10 +0000 UTC" firstStartedPulling="2025-07-12 00:09:11.0301039 +0000 UTC m=+26.200814036" lastFinishedPulling="2025-07-12 00:09:12.920816979 +0000 UTC m=+28.091527115" observedRunningTime="2025-07-12 00:09:14.061677283 +0000 UTC m=+29.232387419" watchObservedRunningTime="2025-07-12 00:09:14.061968122 +0000 UTC m=+29.232678258" Jul 12 00:09:14.079447 kubelet[3127]: E0712 00:09:14.079417 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:14.079447 kubelet[3127]: W0712 00:09:14.079442 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:14.079632 kubelet[3127]: E0712 00:09:14.079461 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:09:14.079686 kubelet[3127]: E0712 00:09:14.079673 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:14.079686 kubelet[3127]: W0712 00:09:14.079683 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:14.079880 kubelet[3127]: E0712 00:09:14.079693 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:14.080212 kubelet[3127]: E0712 00:09:14.079974 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:14.080212 kubelet[3127]: W0712 00:09:14.079987 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:14.080212 kubelet[3127]: E0712 00:09:14.079997 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:09:14.080350 kubelet[3127]: E0712 00:09:14.080231 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:14.080350 kubelet[3127]: W0712 00:09:14.080241 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:14.080350 kubelet[3127]: E0712 00:09:14.080251 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:14.080709 kubelet[3127]: E0712 00:09:14.080458 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:14.080709 kubelet[3127]: W0712 00:09:14.080471 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:14.080709 kubelet[3127]: E0712 00:09:14.080481 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:09:14.080925 kubelet[3127]: E0712 00:09:14.080648 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:14.080925 kubelet[3127]: W0712 00:09:14.080896 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:14.080925 kubelet[3127]: E0712 00:09:14.080911 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:14.082405 kubelet[3127]: E0712 00:09:14.081909 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:14.082405 kubelet[3127]: W0712 00:09:14.081926 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:14.082405 kubelet[3127]: E0712 00:09:14.081939 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:09:14.082405 kubelet[3127]: E0712 00:09:14.082130 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:14.082405 kubelet[3127]: W0712 00:09:14.082140 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:14.082405 kubelet[3127]: E0712 00:09:14.082149 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:14.083364 kubelet[3127]: E0712 00:09:14.082571 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:14.083364 kubelet[3127]: W0712 00:09:14.082582 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:14.083364 kubelet[3127]: E0712 00:09:14.082593 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:09:14.083364 kubelet[3127]: E0712 00:09:14.082727 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:14.083364 kubelet[3127]: W0712 00:09:14.082736 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:14.083364 kubelet[3127]: E0712 00:09:14.082744 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:14.083364 kubelet[3127]: E0712 00:09:14.082862 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:14.083364 kubelet[3127]: W0712 00:09:14.082869 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:14.083364 kubelet[3127]: E0712 00:09:14.082877 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:09:14.083364 kubelet[3127]: E0712 00:09:14.083003 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:14.083911 kubelet[3127]: W0712 00:09:14.083010 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:14.083911 kubelet[3127]: E0712 00:09:14.083017 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:14.083911 kubelet[3127]: E0712 00:09:14.083134 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:14.083911 kubelet[3127]: W0712 00:09:14.083140 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:14.083911 kubelet[3127]: E0712 00:09:14.083148 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:09:14.083911 kubelet[3127]: E0712 00:09:14.083325 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:14.083911 kubelet[3127]: W0712 00:09:14.083334 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:14.083911 kubelet[3127]: E0712 00:09:14.083343 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:14.083911 kubelet[3127]: E0712 00:09:14.083770 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:14.083911 kubelet[3127]: W0712 00:09:14.083784 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:14.084129 kubelet[3127]: E0712 00:09:14.083795 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:09:14.095166 kubelet[3127]: E0712 00:09:14.094670 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:14.095166 kubelet[3127]: W0712 00:09:14.094690 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:14.095166 kubelet[3127]: E0712 00:09:14.094704 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:14.095535 kubelet[3127]: E0712 00:09:14.095201 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:14.095535 kubelet[3127]: W0712 00:09:14.095211 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:14.095535 kubelet[3127]: E0712 00:09:14.095223 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:09:14.095985 kubelet[3127]: E0712 00:09:14.095795 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:14.095985 kubelet[3127]: W0712 00:09:14.095813 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:14.095985 kubelet[3127]: E0712 00:09:14.095826 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:14.096530 kubelet[3127]: E0712 00:09:14.096106 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:14.096530 kubelet[3127]: W0712 00:09:14.096117 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:14.096530 kubelet[3127]: E0712 00:09:14.096128 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:09:14.096530 kubelet[3127]: E0712 00:09:14.096361 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:14.096530 kubelet[3127]: W0712 00:09:14.096370 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:14.096530 kubelet[3127]: E0712 00:09:14.096487 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:14.096925 kubelet[3127]: E0712 00:09:14.096906 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:14.096925 kubelet[3127]: W0712 00:09:14.096922 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:14.097006 kubelet[3127]: E0712 00:09:14.096933 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:09:14.097544 kubelet[3127]: E0712 00:09:14.097521 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:14.097544 kubelet[3127]: W0712 00:09:14.097537 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:14.097655 kubelet[3127]: E0712 00:09:14.097550 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:14.097958 kubelet[3127]: E0712 00:09:14.097938 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:14.097958 kubelet[3127]: W0712 00:09:14.097955 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:14.098116 kubelet[3127]: E0712 00:09:14.097967 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:09:14.098574 kubelet[3127]: E0712 00:09:14.098516 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:14.098574 kubelet[3127]: W0712 00:09:14.098534 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:14.098574 kubelet[3127]: E0712 00:09:14.098545 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:14.098885 kubelet[3127]: E0712 00:09:14.098858 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:14.098885 kubelet[3127]: W0712 00:09:14.098868 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:14.098952 kubelet[3127]: E0712 00:09:14.098888 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:09:14.099299 kubelet[3127]: E0712 00:09:14.099237 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:14.099299 kubelet[3127]: W0712 00:09:14.099284 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:14.099299 kubelet[3127]: E0712 00:09:14.099297 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:14.099799 kubelet[3127]: E0712 00:09:14.099768 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:14.099799 kubelet[3127]: W0712 00:09:14.099784 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:14.099799 kubelet[3127]: E0712 00:09:14.099796 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:09:14.100404 kubelet[3127]: E0712 00:09:14.100243 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:14.100404 kubelet[3127]: W0712 00:09:14.100276 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:14.100404 kubelet[3127]: E0712 00:09:14.100289 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:14.100947 kubelet[3127]: E0712 00:09:14.100921 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:14.100947 kubelet[3127]: W0712 00:09:14.100938 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:14.101082 kubelet[3127]: E0712 00:09:14.100951 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:09:14.101559 kubelet[3127]: E0712 00:09:14.101527 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:14.101559 kubelet[3127]: W0712 00:09:14.101549 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:14.101648 kubelet[3127]: E0712 00:09:14.101562 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:14.101951 kubelet[3127]: E0712 00:09:14.101930 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:14.101951 kubelet[3127]: W0712 00:09:14.101948 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:14.102020 kubelet[3127]: E0712 00:09:14.101960 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:09:14.102523 kubelet[3127]: E0712 00:09:14.102498 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:14.102523 kubelet[3127]: W0712 00:09:14.102519 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:14.102602 kubelet[3127]: E0712 00:09:14.102532 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:14.103431 kubelet[3127]: E0712 00:09:14.103377 3127 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:14.103500 kubelet[3127]: W0712 00:09:14.103434 3127 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:14.103500 kubelet[3127]: E0712 00:09:14.103448 3127 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:09:14.179434 containerd[1748]: time="2025-07-12T00:09:14.178747101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:14.182364 containerd[1748]: time="2025-07-12T00:09:14.182293737Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 12 00:09:14.187869 containerd[1748]: time="2025-07-12T00:09:14.187825690Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:14.194929 containerd[1748]: time="2025-07-12T00:09:14.194871642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:14.195304 containerd[1748]: time="2025-07-12T00:09:14.195237361Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.272983744s" Jul 12 00:09:14.195304 containerd[1748]: time="2025-07-12T00:09:14.195282281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 12 00:09:14.203162 containerd[1748]: time="2025-07-12T00:09:14.202978072Z" level=info msg="CreateContainer within sandbox \"cb63876d2c9c92ebdce12aad5ed9a1ef1960046e2c68ef13fb9b1952bac2813a\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 12 00:09:14.258355 containerd[1748]: time="2025-07-12T00:09:14.258240165Z" level=info msg="CreateContainer within sandbox \"cb63876d2c9c92ebdce12aad5ed9a1ef1960046e2c68ef13fb9b1952bac2813a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d63324f044a60061f5be44279a2794aef541659b0b2a98f9b055bc1fc951dc34\"" Jul 12 00:09:14.259219 containerd[1748]: time="2025-07-12T00:09:14.259116924Z" level=info msg="StartContainer for \"d63324f044a60061f5be44279a2794aef541659b0b2a98f9b055bc1fc951dc34\"" Jul 12 00:09:14.288146 systemd[1]: run-containerd-runc-k8s.io-d63324f044a60061f5be44279a2794aef541659b0b2a98f9b055bc1fc951dc34-runc.r1VQ7z.mount: Deactivated successfully. Jul 12 00:09:14.299499 systemd[1]: Started cri-containerd-d63324f044a60061f5be44279a2794aef541659b0b2a98f9b055bc1fc951dc34.scope - libcontainer container d63324f044a60061f5be44279a2794aef541659b0b2a98f9b055bc1fc951dc34. Jul 12 00:09:14.329750 containerd[1748]: time="2025-07-12T00:09:14.329478479Z" level=info msg="StartContainer for \"d63324f044a60061f5be44279a2794aef541659b0b2a98f9b055bc1fc951dc34\" returns successfully" Jul 12 00:09:14.341006 systemd[1]: cri-containerd-d63324f044a60061f5be44279a2794aef541659b0b2a98f9b055bc1fc951dc34.scope: Deactivated successfully. 
Jul 12 00:09:14.883534 containerd[1748]: time="2025-07-12T00:09:14.883431251Z" level=info msg="shim disconnected" id=d63324f044a60061f5be44279a2794aef541659b0b2a98f9b055bc1fc951dc34 namespace=k8s.io Jul 12 00:09:14.883534 containerd[1748]: time="2025-07-12T00:09:14.883513411Z" level=warning msg="cleaning up after shim disconnected" id=d63324f044a60061f5be44279a2794aef541659b0b2a98f9b055bc1fc951dc34 namespace=k8s.io Jul 12 00:09:14.883534 containerd[1748]: time="2025-07-12T00:09:14.883523051Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:09:14.925631 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d63324f044a60061f5be44279a2794aef541659b0b2a98f9b055bc1fc951dc34-rootfs.mount: Deactivated successfully. Jul 12 00:09:14.931217 kubelet[3127]: E0712 00:09:14.930553 3127 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-26mjl" podUID="0465de75-2781-421a-b1c8-807d08b402b9" Jul 12 00:09:15.045908 kubelet[3127]: I0712 00:09:15.045872 3127 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:09:15.049292 containerd[1748]: time="2025-07-12T00:09:15.049052171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 12 00:09:16.931013 kubelet[3127]: E0712 00:09:16.930171 3127 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-26mjl" podUID="0465de75-2781-421a-b1c8-807d08b402b9" Jul 12 00:09:17.430291 containerd[1748]: time="2025-07-12T00:09:17.429901603Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jul 12 00:09:17.433996 containerd[1748]: time="2025-07-12T00:09:17.433853239Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 12 00:09:17.437496 containerd[1748]: time="2025-07-12T00:09:17.437445554Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:17.442502 containerd[1748]: time="2025-07-12T00:09:17.442450109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:17.444468 containerd[1748]: time="2025-07-12T00:09:17.444064387Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.394973096s" Jul 12 00:09:17.444468 containerd[1748]: time="2025-07-12T00:09:17.444096347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 12 00:09:17.457238 containerd[1748]: time="2025-07-12T00:09:17.457213771Z" level=info msg="CreateContainer within sandbox \"cb63876d2c9c92ebdce12aad5ed9a1ef1960046e2c68ef13fb9b1952bac2813a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 12 00:09:17.518774 containerd[1748]: time="2025-07-12T00:09:17.518733059Z" level=info msg="CreateContainer within sandbox \"cb63876d2c9c92ebdce12aad5ed9a1ef1960046e2c68ef13fb9b1952bac2813a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id 
\"1ce17b4359c3403b2a6d49a0e3b401a986beef53ddf35377ce305107e133a21f\"" Jul 12 00:09:17.519529 containerd[1748]: time="2025-07-12T00:09:17.519448859Z" level=info msg="StartContainer for \"1ce17b4359c3403b2a6d49a0e3b401a986beef53ddf35377ce305107e133a21f\"" Jul 12 00:09:17.553506 systemd[1]: Started cri-containerd-1ce17b4359c3403b2a6d49a0e3b401a986beef53ddf35377ce305107e133a21f.scope - libcontainer container 1ce17b4359c3403b2a6d49a0e3b401a986beef53ddf35377ce305107e133a21f. Jul 12 00:09:17.581859 containerd[1748]: time="2025-07-12T00:09:17.581810186Z" level=info msg="StartContainer for \"1ce17b4359c3403b2a6d49a0e3b401a986beef53ddf35377ce305107e133a21f\" returns successfully" Jul 12 00:09:18.746661 containerd[1748]: time="2025-07-12T00:09:18.746605104Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 00:09:18.749293 systemd[1]: cri-containerd-1ce17b4359c3403b2a6d49a0e3b401a986beef53ddf35377ce305107e133a21f.scope: Deactivated successfully. Jul 12 00:09:18.767029 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ce17b4359c3403b2a6d49a0e3b401a986beef53ddf35377ce305107e133a21f-rootfs.mount: Deactivated successfully. Jul 12 00:09:18.828563 kubelet[3127]: I0712 00:09:18.827760 3127 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 12 00:09:19.590899 systemd[1]: Created slice kubepods-burstable-pod664d0b79_40b5_4d9c_8498_5a2e2d35a983.slice - libcontainer container kubepods-burstable-pod664d0b79_40b5_4d9c_8498_5a2e2d35a983.slice. 
Jul 12 00:09:19.596552 containerd[1748]: time="2025-07-12T00:09:19.596050191Z" level=info msg="shim disconnected" id=1ce17b4359c3403b2a6d49a0e3b401a986beef53ddf35377ce305107e133a21f namespace=k8s.io Jul 12 00:09:19.596552 containerd[1748]: time="2025-07-12T00:09:19.596150151Z" level=warning msg="cleaning up after shim disconnected" id=1ce17b4359c3403b2a6d49a0e3b401a986beef53ddf35377ce305107e133a21f namespace=k8s.io Jul 12 00:09:19.596552 containerd[1748]: time="2025-07-12T00:09:19.596162111Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:09:19.607146 systemd[1]: Created slice kubepods-besteffort-pod33b79413_179b_4bb3_828a_0fadd7c84383.slice - libcontainer container kubepods-besteffort-pod33b79413_179b_4bb3_828a_0fadd7c84383.slice. Jul 12 00:09:19.621040 systemd[1]: Created slice kubepods-besteffort-podf86b7bbb_ff83_4e2c_ba5d_1ba824643d5e.slice - libcontainer container kubepods-besteffort-podf86b7bbb_ff83_4e2c_ba5d_1ba824643d5e.slice. Jul 12 00:09:19.633924 kubelet[3127]: I0712 00:09:19.632769 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6872\" (UniqueName: \"kubernetes.io/projected/33b79413-179b-4bb3-828a-0fadd7c84383-kube-api-access-t6872\") pod \"calico-kube-controllers-64fd8cd9c6-dfxmq\" (UID: \"33b79413-179b-4bb3-828a-0fadd7c84383\") " pod="calico-system/calico-kube-controllers-64fd8cd9c6-dfxmq" Jul 12 00:09:19.633924 kubelet[3127]: I0712 00:09:19.632815 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c55f8f7-b85b-4e3f-974e-90f3a29f93c9-config-volume\") pod \"coredns-674b8bbfcf-jvx6w\" (UID: \"2c55f8f7-b85b-4e3f-974e-90f3a29f93c9\") " pod="kube-system/coredns-674b8bbfcf-jvx6w" Jul 12 00:09:19.633924 kubelet[3127]: I0712 00:09:19.632832 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" 
(UniqueName: \"kubernetes.io/secret/f86b7bbb-ff83-4e2c-ba5d-1ba824643d5e-calico-apiserver-certs\") pod \"calico-apiserver-7fc958d55f-s85hw\" (UID: \"f86b7bbb-ff83-4e2c-ba5d-1ba824643d5e\") " pod="calico-apiserver/calico-apiserver-7fc958d55f-s85hw" Jul 12 00:09:19.633924 kubelet[3127]: I0712 00:09:19.632849 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/664d0b79-40b5-4d9c-8498-5a2e2d35a983-config-volume\") pod \"coredns-674b8bbfcf-6lhhr\" (UID: \"664d0b79-40b5-4d9c-8498-5a2e2d35a983\") " pod="kube-system/coredns-674b8bbfcf-6lhhr" Jul 12 00:09:19.633924 kubelet[3127]: I0712 00:09:19.632866 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71cc5b23-9cc4-431b-8900-01dd686edb1f-whisker-ca-bundle\") pod \"whisker-758f9f458-7vxfp\" (UID: \"71cc5b23-9cc4-431b-8900-01dd686edb1f\") " pod="calico-system/whisker-758f9f458-7vxfp" Jul 12 00:09:19.634146 kubelet[3127]: I0712 00:09:19.632882 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmtfm\" (UniqueName: \"kubernetes.io/projected/664d0b79-40b5-4d9c-8498-5a2e2d35a983-kube-api-access-nmtfm\") pod \"coredns-674b8bbfcf-6lhhr\" (UID: \"664d0b79-40b5-4d9c-8498-5a2e2d35a983\") " pod="kube-system/coredns-674b8bbfcf-6lhhr" Jul 12 00:09:19.634146 kubelet[3127]: I0712 00:09:19.632897 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33b79413-179b-4bb3-828a-0fadd7c84383-tigera-ca-bundle\") pod \"calico-kube-controllers-64fd8cd9c6-dfxmq\" (UID: \"33b79413-179b-4bb3-828a-0fadd7c84383\") " pod="calico-system/calico-kube-controllers-64fd8cd9c6-dfxmq" Jul 12 00:09:19.634146 kubelet[3127]: I0712 00:09:19.632914 3127 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/71cc5b23-9cc4-431b-8900-01dd686edb1f-whisker-backend-key-pair\") pod \"whisker-758f9f458-7vxfp\" (UID: \"71cc5b23-9cc4-431b-8900-01dd686edb1f\") " pod="calico-system/whisker-758f9f458-7vxfp" Jul 12 00:09:19.634146 kubelet[3127]: I0712 00:09:19.632932 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqnxx\" (UniqueName: \"kubernetes.io/projected/71cc5b23-9cc4-431b-8900-01dd686edb1f-kube-api-access-tqnxx\") pod \"whisker-758f9f458-7vxfp\" (UID: \"71cc5b23-9cc4-431b-8900-01dd686edb1f\") " pod="calico-system/whisker-758f9f458-7vxfp" Jul 12 00:09:19.634146 kubelet[3127]: I0712 00:09:19.632989 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64stz\" (UniqueName: \"kubernetes.io/projected/f86b7bbb-ff83-4e2c-ba5d-1ba824643d5e-kube-api-access-64stz\") pod \"calico-apiserver-7fc958d55f-s85hw\" (UID: \"f86b7bbb-ff83-4e2c-ba5d-1ba824643d5e\") " pod="calico-apiserver/calico-apiserver-7fc958d55f-s85hw" Jul 12 00:09:19.634303 kubelet[3127]: I0712 00:09:19.633005 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-879jz\" (UniqueName: \"kubernetes.io/projected/2c55f8f7-b85b-4e3f-974e-90f3a29f93c9-kube-api-access-879jz\") pod \"coredns-674b8bbfcf-jvx6w\" (UID: \"2c55f8f7-b85b-4e3f-974e-90f3a29f93c9\") " pod="kube-system/coredns-674b8bbfcf-jvx6w" Jul 12 00:09:19.634575 systemd[1]: Created slice kubepods-burstable-pod2c55f8f7_b85b_4e3f_974e_90f3a29f93c9.slice - libcontainer container kubepods-burstable-pod2c55f8f7_b85b_4e3f_974e_90f3a29f93c9.slice. Jul 12 00:09:19.643398 systemd[1]: Created slice kubepods-besteffort-pod71cc5b23_9cc4_431b_8900_01dd686edb1f.slice - libcontainer container kubepods-besteffort-pod71cc5b23_9cc4_431b_8900_01dd686edb1f.slice. 
Jul 12 00:09:19.654342 systemd[1]: Created slice kubepods-besteffort-podc64c54d7_e2ea_42e7_9d83_27f006d7ff1f.slice - libcontainer container kubepods-besteffort-podc64c54d7_e2ea_42e7_9d83_27f006d7ff1f.slice. Jul 12 00:09:19.660463 systemd[1]: Created slice kubepods-besteffort-pod9c5661c3_a6ae_4537_a0fb_159b28c4d8b2.slice - libcontainer container kubepods-besteffort-pod9c5661c3_a6ae_4537_a0fb_159b28c4d8b2.slice. Jul 12 00:09:19.665825 systemd[1]: Created slice kubepods-besteffort-pod0465de75_2781_421a_b1c8_807d08b402b9.slice - libcontainer container kubepods-besteffort-pod0465de75_2781_421a_b1c8_807d08b402b9.slice. Jul 12 00:09:19.669008 containerd[1748]: time="2025-07-12T00:09:19.668968985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-26mjl,Uid:0465de75-2781-421a-b1c8-807d08b402b9,Namespace:calico-system,Attempt:0,}" Jul 12 00:09:19.671844 systemd[1]: Created slice kubepods-besteffort-podbd421362_01fe_4241_bb04_72d4085cf927.slice - libcontainer container kubepods-besteffort-podbd421362_01fe_4241_bb04_72d4085cf927.slice. 
Jul 12 00:09:19.733564 kubelet[3127]: I0712 00:09:19.733521 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c64c54d7-e2ea-42e7-9d83-27f006d7ff1f-calico-apiserver-certs\") pod \"calico-apiserver-79f66ccc75-c2rbd\" (UID: \"c64c54d7-e2ea-42e7-9d83-27f006d7ff1f\") " pod="calico-apiserver/calico-apiserver-79f66ccc75-c2rbd" Jul 12 00:09:19.736056 kubelet[3127]: I0712 00:09:19.734086 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9c5661c3-a6ae-4537-a0fb-159b28c4d8b2-calico-apiserver-certs\") pod \"calico-apiserver-7fc958d55f-sgjtg\" (UID: \"9c5661c3-a6ae-4537-a0fb-159b28c4d8b2\") " pod="calico-apiserver/calico-apiserver-7fc958d55f-sgjtg" Jul 12 00:09:19.736056 kubelet[3127]: I0712 00:09:19.734242 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl2gc\" (UniqueName: \"kubernetes.io/projected/c64c54d7-e2ea-42e7-9d83-27f006d7ff1f-kube-api-access-nl2gc\") pod \"calico-apiserver-79f66ccc75-c2rbd\" (UID: \"c64c54d7-e2ea-42e7-9d83-27f006d7ff1f\") " pod="calico-apiserver/calico-apiserver-79f66ccc75-c2rbd" Jul 12 00:09:19.736056 kubelet[3127]: I0712 00:09:19.734293 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncl8w\" (UniqueName: \"kubernetes.io/projected/9c5661c3-a6ae-4537-a0fb-159b28c4d8b2-kube-api-access-ncl8w\") pod \"calico-apiserver-7fc958d55f-sgjtg\" (UID: \"9c5661c3-a6ae-4537-a0fb-159b28c4d8b2\") " pod="calico-apiserver/calico-apiserver-7fc958d55f-sgjtg" Jul 12 00:09:19.736056 kubelet[3127]: I0712 00:09:19.734311 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/bd421362-01fe-4241-bb04-72d4085cf927-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-4sphm\" (UID: \"bd421362-01fe-4241-bb04-72d4085cf927\") " pod="calico-system/goldmane-768f4c5c69-4sphm" Jul 12 00:09:19.736056 kubelet[3127]: I0712 00:09:19.734326 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/bd421362-01fe-4241-bb04-72d4085cf927-goldmane-key-pair\") pod \"goldmane-768f4c5c69-4sphm\" (UID: \"bd421362-01fe-4241-bb04-72d4085cf927\") " pod="calico-system/goldmane-768f4c5c69-4sphm" Jul 12 00:09:19.736247 kubelet[3127]: I0712 00:09:19.734368 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd421362-01fe-4241-bb04-72d4085cf927-config\") pod \"goldmane-768f4c5c69-4sphm\" (UID: \"bd421362-01fe-4241-bb04-72d4085cf927\") " pod="calico-system/goldmane-768f4c5c69-4sphm" Jul 12 00:09:19.736247 kubelet[3127]: I0712 00:09:19.734389 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spf5t\" (UniqueName: \"kubernetes.io/projected/bd421362-01fe-4241-bb04-72d4085cf927-kube-api-access-spf5t\") pod \"goldmane-768f4c5c69-4sphm\" (UID: \"bd421362-01fe-4241-bb04-72d4085cf927\") " pod="calico-system/goldmane-768f4c5c69-4sphm" Jul 12 00:09:19.826601 containerd[1748]: time="2025-07-12T00:09:19.826550881Z" level=error msg="Failed to destroy network for sandbox \"5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:19.826951 containerd[1748]: time="2025-07-12T00:09:19.826877361Z" level=error msg="encountered an error cleaning up failed sandbox 
\"5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:19.826951 containerd[1748]: time="2025-07-12T00:09:19.826924401Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-26mjl,Uid:0465de75-2781-421a-b1c8-807d08b402b9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:19.829017 kubelet[3127]: E0712 00:09:19.827165 3127 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:19.829017 kubelet[3127]: E0712 00:09:19.827234 3127 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-26mjl" Jul 12 00:09:19.829017 kubelet[3127]: E0712 00:09:19.827268 3127 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-26mjl" Jul 12 00:09:19.829323 kubelet[3127]: E0712 00:09:19.827324 3127 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-26mjl_calico-system(0465de75-2781-421a-b1c8-807d08b402b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-26mjl_calico-system(0465de75-2781-421a-b1c8-807d08b402b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-26mjl" podUID="0465de75-2781-421a-b1c8-807d08b402b9" Jul 12 00:09:19.894763 containerd[1748]: time="2025-07-12T00:09:19.894293002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6lhhr,Uid:664d0b79-40b5-4d9c-8498-5a2e2d35a983,Namespace:kube-system,Attempt:0,}" Jul 12 00:09:19.920883 containerd[1748]: time="2025-07-12T00:09:19.920833691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64fd8cd9c6-dfxmq,Uid:33b79413-179b-4bb3-828a-0fadd7c84383,Namespace:calico-system,Attempt:0,}" Jul 12 00:09:19.936295 containerd[1748]: time="2025-07-12T00:09:19.936221313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fc958d55f-s85hw,Uid:f86b7bbb-ff83-4e2c-ba5d-1ba824643d5e,Namespace:calico-apiserver,Attempt:0,}" Jul 12 00:09:19.942313 containerd[1748]: time="2025-07-12T00:09:19.942241746Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-jvx6w,Uid:2c55f8f7-b85b-4e3f-974e-90f3a29f93c9,Namespace:kube-system,Attempt:0,}" Jul 12 00:09:19.953280 containerd[1748]: time="2025-07-12T00:09:19.951420935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-758f9f458-7vxfp,Uid:71cc5b23-9cc4-431b-8900-01dd686edb1f,Namespace:calico-system,Attempt:0,}" Jul 12 00:09:19.957930 containerd[1748]: time="2025-07-12T00:09:19.957900608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79f66ccc75-c2rbd,Uid:c64c54d7-e2ea-42e7-9d83-27f006d7ff1f,Namespace:calico-apiserver,Attempt:0,}" Jul 12 00:09:19.971509 containerd[1748]: time="2025-07-12T00:09:19.971475512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fc958d55f-sgjtg,Uid:9c5661c3-a6ae-4537-a0fb-159b28c4d8b2,Namespace:calico-apiserver,Attempt:0,}" Jul 12 00:09:19.974755 containerd[1748]: time="2025-07-12T00:09:19.974717188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-4sphm,Uid:bd421362-01fe-4241-bb04-72d4085cf927,Namespace:calico-system,Attempt:0,}" Jul 12 00:09:20.065796 containerd[1748]: time="2025-07-12T00:09:20.065736482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 12 00:09:20.066415 kubelet[3127]: I0712 00:09:20.066384 3127 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" Jul 12 00:09:20.067200 containerd[1748]: time="2025-07-12T00:09:20.067145520Z" level=info msg="StopPodSandbox for \"5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af\"" Jul 12 00:09:20.067627 containerd[1748]: time="2025-07-12T00:09:20.067424080Z" level=info msg="Ensure that sandbox 5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af in task-service has been cleanup successfully" Jul 12 00:09:20.105532 containerd[1748]: time="2025-07-12T00:09:20.105267755Z" level=error msg="StopPodSandbox 
for \"5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af\" failed" error="failed to destroy network for sandbox \"5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:20.105780 kubelet[3127]: E0712 00:09:20.105467 3127 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" Jul 12 00:09:20.105928 kubelet[3127]: E0712 00:09:20.105523 3127 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af"} Jul 12 00:09:20.106073 kubelet[3127]: E0712 00:09:20.105946 3127 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0465de75-2781-421a-b1c8-807d08b402b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:09:20.106073 kubelet[3127]: E0712 00:09:20.105970 3127 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0465de75-2781-421a-b1c8-807d08b402b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-26mjl" podUID="0465de75-2781-421a-b1c8-807d08b402b9" Jul 12 00:09:20.121579 containerd[1748]: time="2025-07-12T00:09:20.121533736Z" level=error msg="Failed to destroy network for sandbox \"9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:20.121893 containerd[1748]: time="2025-07-12T00:09:20.121860936Z" level=error msg="encountered an error cleaning up failed sandbox \"9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:20.121976 containerd[1748]: time="2025-07-12T00:09:20.121918096Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6lhhr,Uid:664d0b79-40b5-4d9c-8498-5a2e2d35a983,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:20.122247 kubelet[3127]: E0712 00:09:20.122206 3127 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:20.122425 kubelet[3127]: E0712 00:09:20.122314 3127 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6lhhr" Jul 12 00:09:20.122425 kubelet[3127]: E0712 00:09:20.122338 3127 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6lhhr" Jul 12 00:09:20.122864 kubelet[3127]: E0712 00:09:20.122549 3127 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6lhhr_kube-system(664d0b79-40b5-4d9c-8498-5a2e2d35a983)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6lhhr_kube-system(664d0b79-40b5-4d9c-8498-5a2e2d35a983)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6lhhr" podUID="664d0b79-40b5-4d9c-8498-5a2e2d35a983" Jul 12 00:09:20.259120 containerd[1748]: 
time="2025-07-12T00:09:20.259023136Z" level=error msg="Failed to destroy network for sandbox \"d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:20.259620 containerd[1748]: time="2025-07-12T00:09:20.259479855Z" level=error msg="encountered an error cleaning up failed sandbox \"d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:20.259620 containerd[1748]: time="2025-07-12T00:09:20.259529175Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64fd8cd9c6-dfxmq,Uid:33b79413-179b-4bb3-828a-0fadd7c84383,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:20.259878 kubelet[3127]: E0712 00:09:20.259736 3127 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:20.259878 kubelet[3127]: E0712 00:09:20.259790 3127 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64fd8cd9c6-dfxmq" Jul 12 00:09:20.259878 kubelet[3127]: E0712 00:09:20.259810 3127 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64fd8cd9c6-dfxmq" Jul 12 00:09:20.260042 kubelet[3127]: E0712 00:09:20.259866 3127 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-64fd8cd9c6-dfxmq_calico-system(33b79413-179b-4bb3-828a-0fadd7c84383)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-64fd8cd9c6-dfxmq_calico-system(33b79413-179b-4bb3-828a-0fadd7c84383)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64fd8cd9c6-dfxmq" podUID="33b79413-179b-4bb3-828a-0fadd7c84383" Jul 12 00:09:20.327835 containerd[1748]: time="2025-07-12T00:09:20.327720575Z" level=error msg="Failed to destroy network for sandbox \"cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:20.328658 containerd[1748]: time="2025-07-12T00:09:20.328527174Z" level=error msg="encountered an error cleaning up failed sandbox \"cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:20.328985 containerd[1748]: time="2025-07-12T00:09:20.328860734Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fc958d55f-s85hw,Uid:f86b7bbb-ff83-4e2c-ba5d-1ba824643d5e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:20.330083 kubelet[3127]: E0712 00:09:20.329711 3127 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:20.330083 kubelet[3127]: E0712 00:09:20.329776 3127 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-7fc958d55f-s85hw" Jul 12 00:09:20.330083 kubelet[3127]: E0712 00:09:20.329803 3127 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fc958d55f-s85hw" Jul 12 00:09:20.330229 kubelet[3127]: E0712 00:09:20.329853 3127 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fc958d55f-s85hw_calico-apiserver(f86b7bbb-ff83-4e2c-ba5d-1ba824643d5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fc958d55f-s85hw_calico-apiserver(f86b7bbb-ff83-4e2c-ba5d-1ba824643d5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fc958d55f-s85hw" podUID="f86b7bbb-ff83-4e2c-ba5d-1ba824643d5e" Jul 12 00:09:20.412984 containerd[1748]: time="2025-07-12T00:09:20.412937476Z" level=error msg="Failed to destroy network for sandbox \"c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:20.414652 containerd[1748]: time="2025-07-12T00:09:20.414500754Z" level=error msg="encountered an error cleaning up failed sandbox 
\"c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:20.414652 containerd[1748]: time="2025-07-12T00:09:20.414556154Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-758f9f458-7vxfp,Uid:71cc5b23-9cc4-431b-8900-01dd686edb1f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:20.415337 kubelet[3127]: E0712 00:09:20.414885 3127 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:20.415337 kubelet[3127]: E0712 00:09:20.414943 3127 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-758f9f458-7vxfp" Jul 12 00:09:20.415337 kubelet[3127]: E0712 00:09:20.414962 3127 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-758f9f458-7vxfp" Jul 12 00:09:20.415473 kubelet[3127]: E0712 00:09:20.415007 3127 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-758f9f458-7vxfp_calico-system(71cc5b23-9cc4-431b-8900-01dd686edb1f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-758f9f458-7vxfp_calico-system(71cc5b23-9cc4-431b-8900-01dd686edb1f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-758f9f458-7vxfp" podUID="71cc5b23-9cc4-431b-8900-01dd686edb1f" Jul 12 00:09:20.420262 containerd[1748]: time="2025-07-12T00:09:20.420146267Z" level=error msg="Failed to destroy network for sandbox \"d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:20.420663 containerd[1748]: time="2025-07-12T00:09:20.420595227Z" level=error msg="encountered an error cleaning up failed sandbox \"d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:20.420819 containerd[1748]: 
time="2025-07-12T00:09:20.420744307Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jvx6w,Uid:2c55f8f7-b85b-4e3f-974e-90f3a29f93c9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:20.421192 kubelet[3127]: E0712 00:09:20.421021 3127 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:20.421192 kubelet[3127]: E0712 00:09:20.421077 3127 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jvx6w" Jul 12 00:09:20.421192 kubelet[3127]: E0712 00:09:20.421096 3127 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jvx6w" Jul 12 00:09:20.421343 kubelet[3127]: E0712 00:09:20.421137 3127 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jvx6w_kube-system(2c55f8f7-b85b-4e3f-974e-90f3a29f93c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jvx6w_kube-system(2c55f8f7-b85b-4e3f-974e-90f3a29f93c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jvx6w" podUID="2c55f8f7-b85b-4e3f-974e-90f3a29f93c9" Jul 12 00:09:20.427937 containerd[1748]: time="2025-07-12T00:09:20.427876458Z" level=error msg="Failed to destroy network for sandbox \"00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:20.428364 containerd[1748]: time="2025-07-12T00:09:20.428328578Z" level=error msg="encountered an error cleaning up failed sandbox \"00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:20.428495 containerd[1748]: time="2025-07-12T00:09:20.428470617Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fc958d55f-sgjtg,Uid:9c5661c3-a6ae-4537-a0fb-159b28c4d8b2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:20.429170 kubelet[3127]: E0712 00:09:20.428719 3127 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:20.429170 kubelet[3127]: E0712 00:09:20.429026 3127 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fc958d55f-sgjtg" Jul 12 00:09:20.429170 kubelet[3127]: E0712 00:09:20.429051 3127 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fc958d55f-sgjtg" Jul 12 00:09:20.429484 kubelet[3127]: E0712 00:09:20.429324 3127 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fc958d55f-sgjtg_calico-apiserver(9c5661c3-a6ae-4537-a0fb-159b28c4d8b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fc958d55f-sgjtg_calico-apiserver(9c5661c3-a6ae-4537-a0fb-159b28c4d8b2)\\\": 
rpc error: code = Unknown desc = failed to setup network for sandbox \\\"00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fc958d55f-sgjtg" podUID="9c5661c3-a6ae-4537-a0fb-159b28c4d8b2" Jul 12 00:09:20.439687 containerd[1748]: time="2025-07-12T00:09:20.439598164Z" level=error msg="Failed to destroy network for sandbox \"a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:20.439916 containerd[1748]: time="2025-07-12T00:09:20.439883924Z" level=error msg="encountered an error cleaning up failed sandbox \"a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:20.439964 containerd[1748]: time="2025-07-12T00:09:20.439934524Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-4sphm,Uid:bd421362-01fe-4241-bb04-72d4085cf927,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:20.440133 kubelet[3127]: E0712 00:09:20.440093 3127 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:20.440210 kubelet[3127]: E0712 00:09:20.440141 3127 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-4sphm" Jul 12 00:09:20.440210 kubelet[3127]: E0712 00:09:20.440160 3127 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-4sphm" Jul 12 00:09:20.440317 kubelet[3127]: E0712 00:09:20.440203 3127 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-4sphm_calico-system(bd421362-01fe-4241-bb04-72d4085cf927)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-4sphm_calico-system(bd421362-01fe-4241-bb04-72d4085cf927)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/goldmane-768f4c5c69-4sphm" podUID="bd421362-01fe-4241-bb04-72d4085cf927" Jul 12 00:09:20.443999 containerd[1748]: time="2025-07-12T00:09:20.443506120Z" level=error msg="Failed to destroy network for sandbox \"c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:20.443999 containerd[1748]: time="2025-07-12T00:09:20.443829080Z" level=error msg="encountered an error cleaning up failed sandbox \"c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:20.443999 containerd[1748]: time="2025-07-12T00:09:20.443889359Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79f66ccc75-c2rbd,Uid:c64c54d7-e2ea-42e7-9d83-27f006d7ff1f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:20.444451 kubelet[3127]: E0712 00:09:20.444376 3127 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:20.444451 kubelet[3127]: E0712 00:09:20.444415 3127 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79f66ccc75-c2rbd" Jul 12 00:09:20.444659 kubelet[3127]: E0712 00:09:20.444542 3127 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79f66ccc75-c2rbd" Jul 12 00:09:20.444740 kubelet[3127]: E0712 00:09:20.444588 3127 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79f66ccc75-c2rbd_calico-apiserver(c64c54d7-e2ea-42e7-9d83-27f006d7ff1f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79f66ccc75-c2rbd_calico-apiserver(c64c54d7-e2ea-42e7-9d83-27f006d7ff1f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79f66ccc75-c2rbd" podUID="c64c54d7-e2ea-42e7-9d83-27f006d7ff1f" Jul 12 00:09:20.778439 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af-shm.mount: Deactivated successfully. 
Jul 12 00:09:21.069397 kubelet[3127]: I0712 00:09:21.069098 3127 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" Jul 12 00:09:21.071164 containerd[1748]: time="2025-07-12T00:09:21.070741867Z" level=info msg="StopPodSandbox for \"00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417\"" Jul 12 00:09:21.071928 containerd[1748]: time="2025-07-12T00:09:21.071233906Z" level=info msg="Ensure that sandbox 00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417 in task-service has been cleanup successfully" Jul 12 00:09:21.071969 kubelet[3127]: I0712 00:09:21.071409 3127 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" Jul 12 00:09:21.072055 containerd[1748]: time="2025-07-12T00:09:21.072012985Z" level=info msg="StopPodSandbox for \"c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2\"" Jul 12 00:09:21.072185 containerd[1748]: time="2025-07-12T00:09:21.072158745Z" level=info msg="Ensure that sandbox c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2 in task-service has been cleanup successfully" Jul 12 00:09:21.074091 kubelet[3127]: I0712 00:09:21.074066 3127 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" Jul 12 00:09:21.074616 containerd[1748]: time="2025-07-12T00:09:21.074528182Z" level=info msg="StopPodSandbox for \"9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2\"" Jul 12 00:09:21.074780 containerd[1748]: time="2025-07-12T00:09:21.074660422Z" level=info msg="Ensure that sandbox 9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2 in task-service has been cleanup successfully" Jul 12 00:09:21.077010 kubelet[3127]: I0712 00:09:21.076683 3127 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" Jul 12 00:09:21.078200 containerd[1748]: time="2025-07-12T00:09:21.078091378Z" level=info msg="StopPodSandbox for \"cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace\"" Jul 12 00:09:21.078646 containerd[1748]: time="2025-07-12T00:09:21.078608497Z" level=info msg="Ensure that sandbox cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace in task-service has been cleanup successfully" Jul 12 00:09:21.082957 kubelet[3127]: I0712 00:09:21.082316 3127 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" Jul 12 00:09:21.084047 containerd[1748]: time="2025-07-12T00:09:21.084012891Z" level=info msg="StopPodSandbox for \"a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2\"" Jul 12 00:09:21.084323 containerd[1748]: time="2025-07-12T00:09:21.084144051Z" level=info msg="Ensure that sandbox a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2 in task-service has been cleanup successfully" Jul 12 00:09:21.085402 kubelet[3127]: I0712 00:09:21.085370 3127 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" Jul 12 00:09:21.085935 containerd[1748]: time="2025-07-12T00:09:21.085847449Z" level=info msg="StopPodSandbox for \"d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485\"" Jul 12 00:09:21.086648 containerd[1748]: time="2025-07-12T00:09:21.085979449Z" level=info msg="Ensure that sandbox d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485 in task-service has been cleanup successfully" Jul 12 00:09:21.093389 kubelet[3127]: I0712 00:09:21.093366 3127 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" Jul 12 00:09:21.094768 
containerd[1748]: time="2025-07-12T00:09:21.094552719Z" level=info msg="StopPodSandbox for \"c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045\"" Jul 12 00:09:21.098996 containerd[1748]: time="2025-07-12T00:09:21.098590754Z" level=info msg="Ensure that sandbox c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045 in task-service has been cleanup successfully" Jul 12 00:09:21.099567 kubelet[3127]: I0712 00:09:21.099483 3127 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" Jul 12 00:09:21.104082 containerd[1748]: time="2025-07-12T00:09:21.104048308Z" level=info msg="StopPodSandbox for \"d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5\"" Jul 12 00:09:21.104322 containerd[1748]: time="2025-07-12T00:09:21.104202947Z" level=info msg="Ensure that sandbox d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5 in task-service has been cleanup successfully" Jul 12 00:09:21.178466 containerd[1748]: time="2025-07-12T00:09:21.178404141Z" level=error msg="StopPodSandbox for \"c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2\" failed" error="failed to destroy network for sandbox \"c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:21.178974 kubelet[3127]: E0712 00:09:21.178813 3127 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" Jul 12 00:09:21.178974 kubelet[3127]: E0712 00:09:21.178863 3127 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2"} Jul 12 00:09:21.178974 kubelet[3127]: E0712 00:09:21.178901 3127 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"71cc5b23-9cc4-431b-8900-01dd686edb1f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:09:21.178974 kubelet[3127]: E0712 00:09:21.178923 3127 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"71cc5b23-9cc4-431b-8900-01dd686edb1f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-758f9f458-7vxfp" podUID="71cc5b23-9cc4-431b-8900-01dd686edb1f" Jul 12 00:09:21.184772 containerd[1748]: time="2025-07-12T00:09:21.184580453Z" level=error msg="StopPodSandbox for \"00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417\" failed" error="failed to destroy network for sandbox \"00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Jul 12 00:09:21.184962 kubelet[3127]: E0712 00:09:21.184786 3127 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" Jul 12 00:09:21.184962 kubelet[3127]: E0712 00:09:21.184828 3127 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417"} Jul 12 00:09:21.184962 kubelet[3127]: E0712 00:09:21.184861 3127 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9c5661c3-a6ae-4537-a0fb-159b28c4d8b2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:09:21.184962 kubelet[3127]: E0712 00:09:21.184881 3127 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9c5661c3-a6ae-4537-a0fb-159b28c4d8b2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fc958d55f-sgjtg" podUID="9c5661c3-a6ae-4537-a0fb-159b28c4d8b2" Jul 12 00:09:21.190120 
containerd[1748]: time="2025-07-12T00:09:21.190081127Z" level=error msg="StopPodSandbox for \"cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace\" failed" error="failed to destroy network for sandbox \"cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:21.194711 kubelet[3127]: E0712 00:09:21.194571 3127 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" Jul 12 00:09:21.194711 kubelet[3127]: E0712 00:09:21.194630 3127 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace"} Jul 12 00:09:21.194711 kubelet[3127]: E0712 00:09:21.194656 3127 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f86b7bbb-ff83-4e2c-ba5d-1ba824643d5e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:09:21.194711 kubelet[3127]: E0712 00:09:21.194681 3127 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f86b7bbb-ff83-4e2c-ba5d-1ba824643d5e\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fc958d55f-s85hw" podUID="f86b7bbb-ff83-4e2c-ba5d-1ba824643d5e" Jul 12 00:09:21.196301 containerd[1748]: time="2025-07-12T00:09:21.195245521Z" level=error msg="StopPodSandbox for \"9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2\" failed" error="failed to destroy network for sandbox \"9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:21.196367 kubelet[3127]: E0712 00:09:21.195428 3127 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" Jul 12 00:09:21.196367 kubelet[3127]: E0712 00:09:21.195456 3127 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2"} Jul 12 00:09:21.196367 kubelet[3127]: E0712 00:09:21.195482 3127 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"664d0b79-40b5-4d9c-8498-5a2e2d35a983\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:09:21.196367 kubelet[3127]: E0712 00:09:21.195500 3127 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"664d0b79-40b5-4d9c-8498-5a2e2d35a983\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6lhhr" podUID="664d0b79-40b5-4d9c-8498-5a2e2d35a983" Jul 12 00:09:21.201673 containerd[1748]: time="2025-07-12T00:09:21.201639194Z" level=error msg="StopPodSandbox for \"c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045\" failed" error="failed to destroy network for sandbox \"c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:21.202027 containerd[1748]: time="2025-07-12T00:09:21.201978873Z" level=error msg="StopPodSandbox for \"d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485\" failed" error="failed to destroy network for sandbox \"d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:21.202172 kubelet[3127]: E0712 00:09:21.202122 3127 log.go:32] "StopPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" Jul 12 00:09:21.202172 kubelet[3127]: E0712 00:09:21.202157 3127 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485"} Jul 12 00:09:21.202439 kubelet[3127]: E0712 00:09:21.202183 3127 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2c55f8f7-b85b-4e3f-974e-90f3a29f93c9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:09:21.202439 kubelet[3127]: E0712 00:09:21.202202 3127 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2c55f8f7-b85b-4e3f-974e-90f3a29f93c9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jvx6w" podUID="2c55f8f7-b85b-4e3f-974e-90f3a29f93c9" Jul 12 00:09:21.202439 kubelet[3127]: E0712 00:09:21.202123 3127 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to destroy network for sandbox \"c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" Jul 12 00:09:21.202439 kubelet[3127]: E0712 00:09:21.202238 3127 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045"} Jul 12 00:09:21.202585 kubelet[3127]: E0712 00:09:21.202302 3127 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c64c54d7-e2ea-42e7-9d83-27f006d7ff1f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:09:21.202585 kubelet[3127]: E0712 00:09:21.202325 3127 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c64c54d7-e2ea-42e7-9d83-27f006d7ff1f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79f66ccc75-c2rbd" podUID="c64c54d7-e2ea-42e7-9d83-27f006d7ff1f" Jul 12 00:09:21.204242 containerd[1748]: time="2025-07-12T00:09:21.204203191Z" level=error msg="StopPodSandbox for \"a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2\" 
failed" error="failed to destroy network for sandbox \"a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:21.204446 kubelet[3127]: E0712 00:09:21.204423 3127 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" Jul 12 00:09:21.204512 kubelet[3127]: E0712 00:09:21.204452 3127 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2"} Jul 12 00:09:21.204512 kubelet[3127]: E0712 00:09:21.204502 3127 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bd421362-01fe-4241-bb04-72d4085cf927\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:09:21.204777 kubelet[3127]: E0712 00:09:21.204747 3127 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bd421362-01fe-4241-bb04-72d4085cf927\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-4sphm" podUID="bd421362-01fe-4241-bb04-72d4085cf927" Jul 12 00:09:21.209555 containerd[1748]: time="2025-07-12T00:09:21.209509984Z" level=error msg="StopPodSandbox for \"d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5\" failed" error="failed to destroy network for sandbox \"d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:21.209726 kubelet[3127]: E0712 00:09:21.209692 3127 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" Jul 12 00:09:21.209785 kubelet[3127]: E0712 00:09:21.209732 3127 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5"} Jul 12 00:09:21.209785 kubelet[3127]: E0712 00:09:21.209760 3127 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"33b79413-179b-4bb3-828a-0fadd7c84383\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:09:21.209867 kubelet[3127]: E0712 00:09:21.209781 3127 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"33b79413-179b-4bb3-828a-0fadd7c84383\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64fd8cd9c6-dfxmq" podUID="33b79413-179b-4bb3-828a-0fadd7c84383" Jul 12 00:09:24.497559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1138550438.mount: Deactivated successfully. Jul 12 00:09:25.424545 containerd[1748]: time="2025-07-12T00:09:25.424487732Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:25.428675 containerd[1748]: time="2025-07-12T00:09:25.428515728Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 12 00:09:25.436230 containerd[1748]: time="2025-07-12T00:09:25.436088840Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:25.446296 containerd[1748]: time="2025-07-12T00:09:25.445869069Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:25.446970 containerd[1748]: time="2025-07-12T00:09:25.446941788Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id 
\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 5.381159306s" Jul 12 00:09:25.447097 containerd[1748]: time="2025-07-12T00:09:25.447078668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 12 00:09:25.517942 containerd[1748]: time="2025-07-12T00:09:25.517904752Z" level=info msg="CreateContainer within sandbox \"cb63876d2c9c92ebdce12aad5ed9a1ef1960046e2c68ef13fb9b1952bac2813a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 12 00:09:26.086850 containerd[1748]: time="2025-07-12T00:09:26.086800338Z" level=info msg="CreateContainer within sandbox \"cb63876d2c9c92ebdce12aad5ed9a1ef1960046e2c68ef13fb9b1952bac2813a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a1c4b35a14977c444a775b45e1dd5ff4e84168abe122e0f17b4dd6db60476dd9\"" Jul 12 00:09:26.087639 containerd[1748]: time="2025-07-12T00:09:26.087607737Z" level=info msg="StartContainer for \"a1c4b35a14977c444a775b45e1dd5ff4e84168abe122e0f17b4dd6db60476dd9\"" Jul 12 00:09:26.117419 systemd[1]: Started cri-containerd-a1c4b35a14977c444a775b45e1dd5ff4e84168abe122e0f17b4dd6db60476dd9.scope - libcontainer container a1c4b35a14977c444a775b45e1dd5ff4e84168abe122e0f17b4dd6db60476dd9. Jul 12 00:09:26.148633 containerd[1748]: time="2025-07-12T00:09:26.148583832Z" level=info msg="StartContainer for \"a1c4b35a14977c444a775b45e1dd5ff4e84168abe122e0f17b4dd6db60476dd9\" returns successfully" Jul 12 00:09:26.434276 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 12 00:09:26.434397 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 12 00:09:26.563619 containerd[1748]: time="2025-07-12T00:09:26.563428904Z" level=info msg="StopPodSandbox for \"c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2\"" Jul 12 00:09:26.724844 containerd[1748]: 2025-07-12 00:09:26.687 [INFO][4441] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" Jul 12 00:09:26.724844 containerd[1748]: 2025-07-12 00:09:26.688 [INFO][4441] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" iface="eth0" netns="/var/run/netns/cni-0ae772a8-e37c-99b3-6efd-4b3767486165" Jul 12 00:09:26.724844 containerd[1748]: 2025-07-12 00:09:26.689 [INFO][4441] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" iface="eth0" netns="/var/run/netns/cni-0ae772a8-e37c-99b3-6efd-4b3767486165" Jul 12 00:09:26.724844 containerd[1748]: 2025-07-12 00:09:26.691 [INFO][4441] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" iface="eth0" netns="/var/run/netns/cni-0ae772a8-e37c-99b3-6efd-4b3767486165" Jul 12 00:09:26.724844 containerd[1748]: 2025-07-12 00:09:26.691 [INFO][4441] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" Jul 12 00:09:26.724844 containerd[1748]: 2025-07-12 00:09:26.691 [INFO][4441] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" Jul 12 00:09:26.724844 containerd[1748]: 2025-07-12 00:09:26.711 [INFO][4450] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" HandleID="k8s-pod-network.c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" Workload="ci--4081.3.4--n--047a586f92-k8s-whisker--758f9f458--7vxfp-eth0" Jul 12 00:09:26.724844 containerd[1748]: 2025-07-12 00:09:26.712 [INFO][4450] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:26.724844 containerd[1748]: 2025-07-12 00:09:26.712 [INFO][4450] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:26.724844 containerd[1748]: 2025-07-12 00:09:26.720 [WARNING][4450] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" HandleID="k8s-pod-network.c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" Workload="ci--4081.3.4--n--047a586f92-k8s-whisker--758f9f458--7vxfp-eth0" Jul 12 00:09:26.724844 containerd[1748]: 2025-07-12 00:09:26.720 [INFO][4450] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" HandleID="k8s-pod-network.c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" Workload="ci--4081.3.4--n--047a586f92-k8s-whisker--758f9f458--7vxfp-eth0" Jul 12 00:09:26.724844 containerd[1748]: 2025-07-12 00:09:26.721 [INFO][4450] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:26.724844 containerd[1748]: 2025-07-12 00:09:26.723 [INFO][4441] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" Jul 12 00:09:26.725389 containerd[1748]: time="2025-07-12T00:09:26.725352970Z" level=info msg="TearDown network for sandbox \"c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2\" successfully" Jul 12 00:09:26.725431 containerd[1748]: time="2025-07-12T00:09:26.725389610Z" level=info msg="StopPodSandbox for \"c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2\" returns successfully" Jul 12 00:09:26.730450 systemd[1]: run-netns-cni\x2d0ae772a8\x2de37c\x2d99b3\x2d6efd\x2d4b3767486165.mount: Deactivated successfully. 
Jul 12 00:09:26.786281 kubelet[3127]: I0712 00:09:26.786217 3127 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqnxx\" (UniqueName: \"kubernetes.io/projected/71cc5b23-9cc4-431b-8900-01dd686edb1f-kube-api-access-tqnxx\") pod \"71cc5b23-9cc4-431b-8900-01dd686edb1f\" (UID: \"71cc5b23-9cc4-431b-8900-01dd686edb1f\") " Jul 12 00:09:26.786281 kubelet[3127]: I0712 00:09:26.786292 3127 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71cc5b23-9cc4-431b-8900-01dd686edb1f-whisker-ca-bundle\") pod \"71cc5b23-9cc4-431b-8900-01dd686edb1f\" (UID: \"71cc5b23-9cc4-431b-8900-01dd686edb1f\") " Jul 12 00:09:26.786683 kubelet[3127]: I0712 00:09:26.786324 3127 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/71cc5b23-9cc4-431b-8900-01dd686edb1f-whisker-backend-key-pair\") pod \"71cc5b23-9cc4-431b-8900-01dd686edb1f\" (UID: \"71cc5b23-9cc4-431b-8900-01dd686edb1f\") " Jul 12 00:09:26.790638 kubelet[3127]: I0712 00:09:26.790562 3127 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71cc5b23-9cc4-431b-8900-01dd686edb1f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "71cc5b23-9cc4-431b-8900-01dd686edb1f" (UID: "71cc5b23-9cc4-431b-8900-01dd686edb1f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 12 00:09:26.803423 kubelet[3127]: I0712 00:09:26.801722 3127 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71cc5b23-9cc4-431b-8900-01dd686edb1f-kube-api-access-tqnxx" (OuterVolumeSpecName: "kube-api-access-tqnxx") pod "71cc5b23-9cc4-431b-8900-01dd686edb1f" (UID: "71cc5b23-9cc4-431b-8900-01dd686edb1f"). InnerVolumeSpecName "kube-api-access-tqnxx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:09:26.801822 systemd[1]: var-lib-kubelet-pods-71cc5b23\x2d9cc4\x2d431b\x2d8900\x2d01dd686edb1f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtqnxx.mount: Deactivated successfully. Jul 12 00:09:26.804895 kubelet[3127]: I0712 00:09:26.804747 3127 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71cc5b23-9cc4-431b-8900-01dd686edb1f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "71cc5b23-9cc4-431b-8900-01dd686edb1f" (UID: "71cc5b23-9cc4-431b-8900-01dd686edb1f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 12 00:09:26.887562 kubelet[3127]: I0712 00:09:26.887431 3127 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71cc5b23-9cc4-431b-8900-01dd686edb1f-whisker-ca-bundle\") on node \"ci-4081.3.4-n-047a586f92\" DevicePath \"\"" Jul 12 00:09:26.887562 kubelet[3127]: I0712 00:09:26.887471 3127 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/71cc5b23-9cc4-431b-8900-01dd686edb1f-whisker-backend-key-pair\") on node \"ci-4081.3.4-n-047a586f92\" DevicePath \"\"" Jul 12 00:09:26.887562 kubelet[3127]: I0712 00:09:26.887484 3127 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tqnxx\" (UniqueName: \"kubernetes.io/projected/71cc5b23-9cc4-431b-8900-01dd686edb1f-kube-api-access-tqnxx\") on node \"ci-4081.3.4-n-047a586f92\" DevicePath \"\"" Jul 12 00:09:26.938336 systemd[1]: Removed slice kubepods-besteffort-pod71cc5b23_9cc4_431b_8900_01dd686edb1f.slice - libcontainer container kubepods-besteffort-pod71cc5b23_9cc4_431b_8900_01dd686edb1f.slice. 
Jul 12 00:09:27.065386 systemd[1]: var-lib-kubelet-pods-71cc5b23\x2d9cc4\x2d431b\x2d8900\x2d01dd686edb1f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 12 00:09:27.156952 kubelet[3127]: I0712 00:09:27.156814 3127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vrbfc" podStartSLOduration=2.899183105 podStartE2EDuration="17.156800425s" podCreationTimestamp="2025-07-12 00:09:10 +0000 UTC" firstStartedPulling="2025-07-12 00:09:11.190799347 +0000 UTC m=+26.361509483" lastFinishedPulling="2025-07-12 00:09:25.448416667 +0000 UTC m=+40.619126803" observedRunningTime="2025-07-12 00:09:27.142053201 +0000 UTC m=+42.312763337" watchObservedRunningTime="2025-07-12 00:09:27.156800425 +0000 UTC m=+42.327510561" Jul 12 00:09:27.225625 systemd[1]: Created slice kubepods-besteffort-pod14d111b6_25f1_455d_82cd_fa2e28de89cb.slice - libcontainer container kubepods-besteffort-pod14d111b6_25f1_455d_82cd_fa2e28de89cb.slice. 
Jul 12 00:09:27.289489 kubelet[3127]: I0712 00:09:27.289437 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/14d111b6-25f1-455d-82cd-fa2e28de89cb-whisker-backend-key-pair\") pod \"whisker-6b8d7d88db-k4nqm\" (UID: \"14d111b6-25f1-455d-82cd-fa2e28de89cb\") " pod="calico-system/whisker-6b8d7d88db-k4nqm" Jul 12 00:09:27.289489 kubelet[3127]: I0712 00:09:27.289492 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14d111b6-25f1-455d-82cd-fa2e28de89cb-whisker-ca-bundle\") pod \"whisker-6b8d7d88db-k4nqm\" (UID: \"14d111b6-25f1-455d-82cd-fa2e28de89cb\") " pod="calico-system/whisker-6b8d7d88db-k4nqm" Jul 12 00:09:27.289654 kubelet[3127]: I0712 00:09:27.289512 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgrdr\" (UniqueName: \"kubernetes.io/projected/14d111b6-25f1-455d-82cd-fa2e28de89cb-kube-api-access-cgrdr\") pod \"whisker-6b8d7d88db-k4nqm\" (UID: \"14d111b6-25f1-455d-82cd-fa2e28de89cb\") " pod="calico-system/whisker-6b8d7d88db-k4nqm" Jul 12 00:09:27.532742 containerd[1748]: time="2025-07-12T00:09:27.532676219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b8d7d88db-k4nqm,Uid:14d111b6-25f1-455d-82cd-fa2e28de89cb,Namespace:calico-system,Attempt:0,}" Jul 12 00:09:27.695555 systemd-networkd[1562]: calieea4ef09362: Link UP Jul 12 00:09:27.695697 systemd-networkd[1562]: calieea4ef09362: Gained carrier Jul 12 00:09:27.713877 containerd[1748]: 2025-07-12 00:09:27.616 [INFO][4471] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 12 00:09:27.713877 containerd[1748]: 2025-07-12 00:09:27.629 [INFO][4471] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4081.3.4--n--047a586f92-k8s-whisker--6b8d7d88db--k4nqm-eth0 whisker-6b8d7d88db- calico-system 14d111b6-25f1-455d-82cd-fa2e28de89cb 931 0 2025-07-12 00:09:27 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6b8d7d88db projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.4-n-047a586f92 whisker-6b8d7d88db-k4nqm eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calieea4ef09362 [] [] }} ContainerID="df6f1a19a3179daf98df85acc9a20a3fd6ba4115f2a5def03d5e4c8c362dee5e" Namespace="calico-system" Pod="whisker-6b8d7d88db-k4nqm" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-whisker--6b8d7d88db--k4nqm-" Jul 12 00:09:27.713877 containerd[1748]: 2025-07-12 00:09:27.629 [INFO][4471] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="df6f1a19a3179daf98df85acc9a20a3fd6ba4115f2a5def03d5e4c8c362dee5e" Namespace="calico-system" Pod="whisker-6b8d7d88db-k4nqm" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-whisker--6b8d7d88db--k4nqm-eth0" Jul 12 00:09:27.713877 containerd[1748]: 2025-07-12 00:09:27.651 [INFO][4484] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df6f1a19a3179daf98df85acc9a20a3fd6ba4115f2a5def03d5e4c8c362dee5e" HandleID="k8s-pod-network.df6f1a19a3179daf98df85acc9a20a3fd6ba4115f2a5def03d5e4c8c362dee5e" Workload="ci--4081.3.4--n--047a586f92-k8s-whisker--6b8d7d88db--k4nqm-eth0" Jul 12 00:09:27.713877 containerd[1748]: 2025-07-12 00:09:27.651 [INFO][4484] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="df6f1a19a3179daf98df85acc9a20a3fd6ba4115f2a5def03d5e4c8c362dee5e" HandleID="k8s-pod-network.df6f1a19a3179daf98df85acc9a20a3fd6ba4115f2a5def03d5e4c8c362dee5e" Workload="ci--4081.3.4--n--047a586f92-k8s-whisker--6b8d7d88db--k4nqm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024aff0), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-4081.3.4-n-047a586f92", "pod":"whisker-6b8d7d88db-k4nqm", "timestamp":"2025-07-12 00:09:27.651478931 +0000 UTC"}, Hostname:"ci-4081.3.4-n-047a586f92", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:09:27.713877 containerd[1748]: 2025-07-12 00:09:27.651 [INFO][4484] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:27.713877 containerd[1748]: 2025-07-12 00:09:27.651 [INFO][4484] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:27.713877 containerd[1748]: 2025-07-12 00:09:27.651 [INFO][4484] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-n-047a586f92' Jul 12 00:09:27.713877 containerd[1748]: 2025-07-12 00:09:27.660 [INFO][4484] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.df6f1a19a3179daf98df85acc9a20a3fd6ba4115f2a5def03d5e4c8c362dee5e" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:27.713877 containerd[1748]: 2025-07-12 00:09:27.665 [INFO][4484] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:27.713877 containerd[1748]: 2025-07-12 00:09:27.669 [INFO][4484] ipam/ipam.go 511: Trying affinity for 192.168.35.192/26 host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:27.713877 containerd[1748]: 2025-07-12 00:09:27.670 [INFO][4484] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.192/26 host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:27.713877 containerd[1748]: 2025-07-12 00:09:27.672 [INFO][4484] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.192/26 host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:27.713877 containerd[1748]: 2025-07-12 00:09:27.672 [INFO][4484] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.35.192/26 
handle="k8s-pod-network.df6f1a19a3179daf98df85acc9a20a3fd6ba4115f2a5def03d5e4c8c362dee5e" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:27.713877 containerd[1748]: 2025-07-12 00:09:27.675 [INFO][4484] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.df6f1a19a3179daf98df85acc9a20a3fd6ba4115f2a5def03d5e4c8c362dee5e Jul 12 00:09:27.713877 containerd[1748]: 2025-07-12 00:09:27.680 [INFO][4484] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.35.192/26 handle="k8s-pod-network.df6f1a19a3179daf98df85acc9a20a3fd6ba4115f2a5def03d5e4c8c362dee5e" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:27.713877 containerd[1748]: 2025-07-12 00:09:27.686 [INFO][4484] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.35.193/26] block=192.168.35.192/26 handle="k8s-pod-network.df6f1a19a3179daf98df85acc9a20a3fd6ba4115f2a5def03d5e4c8c362dee5e" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:27.713877 containerd[1748]: 2025-07-12 00:09:27.686 [INFO][4484] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.193/26] handle="k8s-pod-network.df6f1a19a3179daf98df85acc9a20a3fd6ba4115f2a5def03d5e4c8c362dee5e" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:27.713877 containerd[1748]: 2025-07-12 00:09:27.687 [INFO][4484] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:09:27.713877 containerd[1748]: 2025-07-12 00:09:27.687 [INFO][4484] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.193/26] IPv6=[] ContainerID="df6f1a19a3179daf98df85acc9a20a3fd6ba4115f2a5def03d5e4c8c362dee5e" HandleID="k8s-pod-network.df6f1a19a3179daf98df85acc9a20a3fd6ba4115f2a5def03d5e4c8c362dee5e" Workload="ci--4081.3.4--n--047a586f92-k8s-whisker--6b8d7d88db--k4nqm-eth0" Jul 12 00:09:27.714688 containerd[1748]: 2025-07-12 00:09:27.689 [INFO][4471] cni-plugin/k8s.go 418: Populated endpoint ContainerID="df6f1a19a3179daf98df85acc9a20a3fd6ba4115f2a5def03d5e4c8c362dee5e" Namespace="calico-system" Pod="whisker-6b8d7d88db-k4nqm" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-whisker--6b8d7d88db--k4nqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--047a586f92-k8s-whisker--6b8d7d88db--k4nqm-eth0", GenerateName:"whisker-6b8d7d88db-", Namespace:"calico-system", SelfLink:"", UID:"14d111b6-25f1-455d-82cd-fa2e28de89cb", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b8d7d88db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-047a586f92", ContainerID:"", Pod:"whisker-6b8d7d88db-k4nqm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.35.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"calieea4ef09362", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:27.714688 containerd[1748]: 2025-07-12 00:09:27.689 [INFO][4471] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.193/32] ContainerID="df6f1a19a3179daf98df85acc9a20a3fd6ba4115f2a5def03d5e4c8c362dee5e" Namespace="calico-system" Pod="whisker-6b8d7d88db-k4nqm" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-whisker--6b8d7d88db--k4nqm-eth0" Jul 12 00:09:27.714688 containerd[1748]: 2025-07-12 00:09:27.689 [INFO][4471] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieea4ef09362 ContainerID="df6f1a19a3179daf98df85acc9a20a3fd6ba4115f2a5def03d5e4c8c362dee5e" Namespace="calico-system" Pod="whisker-6b8d7d88db-k4nqm" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-whisker--6b8d7d88db--k4nqm-eth0" Jul 12 00:09:27.714688 containerd[1748]: 2025-07-12 00:09:27.696 [INFO][4471] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="df6f1a19a3179daf98df85acc9a20a3fd6ba4115f2a5def03d5e4c8c362dee5e" Namespace="calico-system" Pod="whisker-6b8d7d88db-k4nqm" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-whisker--6b8d7d88db--k4nqm-eth0" Jul 12 00:09:27.714688 containerd[1748]: 2025-07-12 00:09:27.696 [INFO][4471] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="df6f1a19a3179daf98df85acc9a20a3fd6ba4115f2a5def03d5e4c8c362dee5e" Namespace="calico-system" Pod="whisker-6b8d7d88db-k4nqm" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-whisker--6b8d7d88db--k4nqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--047a586f92-k8s-whisker--6b8d7d88db--k4nqm-eth0", GenerateName:"whisker-6b8d7d88db-", Namespace:"calico-system", SelfLink:"", 
UID:"14d111b6-25f1-455d-82cd-fa2e28de89cb", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b8d7d88db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-047a586f92", ContainerID:"df6f1a19a3179daf98df85acc9a20a3fd6ba4115f2a5def03d5e4c8c362dee5e", Pod:"whisker-6b8d7d88db-k4nqm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.35.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calieea4ef09362", MAC:"a6:2b:16:0a:65:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:27.714688 containerd[1748]: 2025-07-12 00:09:27.710 [INFO][4471] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="df6f1a19a3179daf98df85acc9a20a3fd6ba4115f2a5def03d5e4c8c362dee5e" Namespace="calico-system" Pod="whisker-6b8d7d88db-k4nqm" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-whisker--6b8d7d88db--k4nqm-eth0" Jul 12 00:09:27.737470 containerd[1748]: time="2025-07-12T00:09:27.737193999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:09:27.737727 containerd[1748]: time="2025-07-12T00:09:27.737659438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:09:27.737903 containerd[1748]: time="2025-07-12T00:09:27.737686078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:27.737903 containerd[1748]: time="2025-07-12T00:09:27.737820198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:27.752494 systemd[1]: Started cri-containerd-df6f1a19a3179daf98df85acc9a20a3fd6ba4115f2a5def03d5e4c8c362dee5e.scope - libcontainer container df6f1a19a3179daf98df85acc9a20a3fd6ba4115f2a5def03d5e4c8c362dee5e. Jul 12 00:09:27.783990 containerd[1748]: time="2025-07-12T00:09:27.782961750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b8d7d88db-k4nqm,Uid:14d111b6-25f1-455d-82cd-fa2e28de89cb,Namespace:calico-system,Attempt:0,} returns sandbox id \"df6f1a19a3179daf98df85acc9a20a3fd6ba4115f2a5def03d5e4c8c362dee5e\"" Jul 12 00:09:27.787878 containerd[1748]: time="2025-07-12T00:09:27.787792824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 12 00:09:28.714505 systemd-networkd[1562]: calieea4ef09362: Gained IPv6LL Jul 12 00:09:28.933125 kubelet[3127]: I0712 00:09:28.932940 3127 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71cc5b23-9cc4-431b-8900-01dd686edb1f" path="/var/lib/kubelet/pods/71cc5b23-9cc4-431b-8900-01dd686edb1f/volumes" Jul 12 00:09:29.431003 containerd[1748]: time="2025-07-12T00:09:29.430946333Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:29.435623 containerd[1748]: time="2025-07-12T00:09:29.435466808Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 12 00:09:29.445306 containerd[1748]: time="2025-07-12T00:09:29.444821518Z" level=info 
msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:29.451689 containerd[1748]: time="2025-07-12T00:09:29.451645631Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:29.452638 containerd[1748]: time="2025-07-12T00:09:29.452602390Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.664503406s" Jul 12 00:09:29.452709 containerd[1748]: time="2025-07-12T00:09:29.452638390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 12 00:09:29.460907 containerd[1748]: time="2025-07-12T00:09:29.460861021Z" level=info msg="CreateContainer within sandbox \"df6f1a19a3179daf98df85acc9a20a3fd6ba4115f2a5def03d5e4c8c362dee5e\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 12 00:09:29.529846 containerd[1748]: time="2025-07-12T00:09:29.529763066Z" level=info msg="CreateContainer within sandbox \"df6f1a19a3179daf98df85acc9a20a3fd6ba4115f2a5def03d5e4c8c362dee5e\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"adcd073afcc6d13e43f21846e338793805c6609474f8a740b88e8cc99bcf4ed5\"" Jul 12 00:09:29.530647 containerd[1748]: time="2025-07-12T00:09:29.530366346Z" level=info msg="StartContainer for \"adcd073afcc6d13e43f21846e338793805c6609474f8a740b88e8cc99bcf4ed5\"" Jul 12 00:09:29.558086 systemd[1]: 
run-containerd-runc-k8s.io-adcd073afcc6d13e43f21846e338793805c6609474f8a740b88e8cc99bcf4ed5-runc.gNdDZm.mount: Deactivated successfully. Jul 12 00:09:29.565415 systemd[1]: Started cri-containerd-adcd073afcc6d13e43f21846e338793805c6609474f8a740b88e8cc99bcf4ed5.scope - libcontainer container adcd073afcc6d13e43f21846e338793805c6609474f8a740b88e8cc99bcf4ed5. Jul 12 00:09:29.600214 containerd[1748]: time="2025-07-12T00:09:29.600167471Z" level=info msg="StartContainer for \"adcd073afcc6d13e43f21846e338793805c6609474f8a740b88e8cc99bcf4ed5\" returns successfully" Jul 12 00:09:29.601316 containerd[1748]: time="2025-07-12T00:09:29.601289709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 12 00:09:31.253903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3494016473.mount: Deactivated successfully. Jul 12 00:09:31.336988 containerd[1748]: time="2025-07-12T00:09:31.336211679Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:31.339849 containerd[1748]: time="2025-07-12T00:09:31.339817715Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 12 00:09:31.346968 containerd[1748]: time="2025-07-12T00:09:31.346924147Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:31.355311 containerd[1748]: time="2025-07-12T00:09:31.355274498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:31.356341 containerd[1748]: time="2025-07-12T00:09:31.356214337Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id 
\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.754707548s" Jul 12 00:09:31.356341 containerd[1748]: time="2025-07-12T00:09:31.356248257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 12 00:09:31.373356 containerd[1748]: time="2025-07-12T00:09:31.373320359Z" level=info msg="CreateContainer within sandbox \"df6f1a19a3179daf98df85acc9a20a3fd6ba4115f2a5def03d5e4c8c362dee5e\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 12 00:09:31.424162 containerd[1748]: time="2025-07-12T00:09:31.424085424Z" level=info msg="CreateContainer within sandbox \"df6f1a19a3179daf98df85acc9a20a3fd6ba4115f2a5def03d5e4c8c362dee5e\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"f171de1148bcd63d9041f3640d803b9ebdda6f2a404c1034397570cffd87a40b\"" Jul 12 00:09:31.424857 containerd[1748]: time="2025-07-12T00:09:31.424739743Z" level=info msg="StartContainer for \"f171de1148bcd63d9041f3640d803b9ebdda6f2a404c1034397570cffd87a40b\"" Jul 12 00:09:31.463438 systemd[1]: Started cri-containerd-f171de1148bcd63d9041f3640d803b9ebdda6f2a404c1034397570cffd87a40b.scope - libcontainer container f171de1148bcd63d9041f3640d803b9ebdda6f2a404c1034397570cffd87a40b. 
Jul 12 00:09:31.511537 containerd[1748]: time="2025-07-12T00:09:31.510939451Z" level=info msg="StartContainer for \"f171de1148bcd63d9041f3640d803b9ebdda6f2a404c1034397570cffd87a40b\" returns successfully" Jul 12 00:09:32.154466 kubelet[3127]: I0712 00:09:32.154383 3127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6b8d7d88db-k4nqm" podStartSLOduration=1.5746949099999998 podStartE2EDuration="5.154366011s" podCreationTimestamp="2025-07-12 00:09:27 +0000 UTC" firstStartedPulling="2025-07-12 00:09:27.785544107 +0000 UTC m=+42.956254243" lastFinishedPulling="2025-07-12 00:09:31.365215168 +0000 UTC m=+46.535925344" observedRunningTime="2025-07-12 00:09:32.153171972 +0000 UTC m=+47.323882068" watchObservedRunningTime="2025-07-12 00:09:32.154366011 +0000 UTC m=+47.325076147" Jul 12 00:09:32.934558 containerd[1748]: time="2025-07-12T00:09:32.934399762Z" level=info msg="StopPodSandbox for \"00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417\"" Jul 12 00:09:32.936546 containerd[1748]: time="2025-07-12T00:09:32.935212001Z" level=info msg="StopPodSandbox for \"d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5\"" Jul 12 00:09:33.045353 containerd[1748]: 2025-07-12 00:09:33.002 [INFO][4821] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" Jul 12 00:09:33.045353 containerd[1748]: 2025-07-12 00:09:33.003 [INFO][4821] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" iface="eth0" netns="/var/run/netns/cni-5a7e8291-cf8a-d736-6f88-0fa9b22e2e90" Jul 12 00:09:33.045353 containerd[1748]: 2025-07-12 00:09:33.003 [INFO][4821] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" iface="eth0" netns="/var/run/netns/cni-5a7e8291-cf8a-d736-6f88-0fa9b22e2e90" Jul 12 00:09:33.045353 containerd[1748]: 2025-07-12 00:09:33.004 [INFO][4821] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" iface="eth0" netns="/var/run/netns/cni-5a7e8291-cf8a-d736-6f88-0fa9b22e2e90" Jul 12 00:09:33.045353 containerd[1748]: 2025-07-12 00:09:33.004 [INFO][4821] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" Jul 12 00:09:33.045353 containerd[1748]: 2025-07-12 00:09:33.004 [INFO][4821] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" Jul 12 00:09:33.045353 containerd[1748]: 2025-07-12 00:09:33.030 [INFO][4834] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" HandleID="k8s-pod-network.d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--kube--controllers--64fd8cd9c6--dfxmq-eth0" Jul 12 00:09:33.045353 containerd[1748]: 2025-07-12 00:09:33.030 [INFO][4834] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:33.045353 containerd[1748]: 2025-07-12 00:09:33.030 [INFO][4834] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:33.045353 containerd[1748]: 2025-07-12 00:09:33.039 [WARNING][4834] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" HandleID="k8s-pod-network.d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--kube--controllers--64fd8cd9c6--dfxmq-eth0" Jul 12 00:09:33.045353 containerd[1748]: 2025-07-12 00:09:33.039 [INFO][4834] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" HandleID="k8s-pod-network.d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--kube--controllers--64fd8cd9c6--dfxmq-eth0" Jul 12 00:09:33.045353 containerd[1748]: 2025-07-12 00:09:33.041 [INFO][4834] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:33.045353 containerd[1748]: 2025-07-12 00:09:33.043 [INFO][4821] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" Jul 12 00:09:33.050513 containerd[1748]: time="2025-07-12T00:09:33.045571956Z" level=info msg="TearDown network for sandbox \"d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5\" successfully" Jul 12 00:09:33.050513 containerd[1748]: time="2025-07-12T00:09:33.045609276Z" level=info msg="StopPodSandbox for \"d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5\" returns successfully" Jul 12 00:09:33.048096 systemd[1]: run-netns-cni\x2d5a7e8291\x2dcf8a\x2dd736\x2d6f88\x2d0fa9b22e2e90.mount: Deactivated successfully. 
Jul 12 00:09:33.050945 containerd[1748]: time="2025-07-12T00:09:33.050908430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64fd8cd9c6-dfxmq,Uid:33b79413-179b-4bb3-828a-0fadd7c84383,Namespace:calico-system,Attempt:1,}" Jul 12 00:09:33.061886 containerd[1748]: 2025-07-12 00:09:33.015 [INFO][4817] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" Jul 12 00:09:33.061886 containerd[1748]: 2025-07-12 00:09:33.015 [INFO][4817] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" iface="eth0" netns="/var/run/netns/cni-f6b10089-8142-9f73-d2aa-befa9e8abecc" Jul 12 00:09:33.061886 containerd[1748]: 2025-07-12 00:09:33.016 [INFO][4817] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" iface="eth0" netns="/var/run/netns/cni-f6b10089-8142-9f73-d2aa-befa9e8abecc" Jul 12 00:09:33.061886 containerd[1748]: 2025-07-12 00:09:33.016 [INFO][4817] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" iface="eth0" netns="/var/run/netns/cni-f6b10089-8142-9f73-d2aa-befa9e8abecc" Jul 12 00:09:33.061886 containerd[1748]: 2025-07-12 00:09:33.016 [INFO][4817] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" Jul 12 00:09:33.061886 containerd[1748]: 2025-07-12 00:09:33.016 [INFO][4817] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" Jul 12 00:09:33.061886 containerd[1748]: 2025-07-12 00:09:33.036 [INFO][4842] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" HandleID="k8s-pod-network.00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--sgjtg-eth0" Jul 12 00:09:33.061886 containerd[1748]: 2025-07-12 00:09:33.036 [INFO][4842] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:33.061886 containerd[1748]: 2025-07-12 00:09:33.041 [INFO][4842] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:33.061886 containerd[1748]: 2025-07-12 00:09:33.057 [WARNING][4842] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" HandleID="k8s-pod-network.00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--sgjtg-eth0" Jul 12 00:09:33.061886 containerd[1748]: 2025-07-12 00:09:33.057 [INFO][4842] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" HandleID="k8s-pod-network.00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--sgjtg-eth0" Jul 12 00:09:33.061886 containerd[1748]: 2025-07-12 00:09:33.058 [INFO][4842] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:33.061886 containerd[1748]: 2025-07-12 00:09:33.060 [INFO][4817] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" Jul 12 00:09:33.064575 containerd[1748]: time="2025-07-12T00:09:33.062028897Z" level=info msg="TearDown network for sandbox \"00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417\" successfully" Jul 12 00:09:33.064575 containerd[1748]: time="2025-07-12T00:09:33.062049217Z" level=info msg="StopPodSandbox for \"00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417\" returns successfully" Jul 12 00:09:33.064575 containerd[1748]: time="2025-07-12T00:09:33.064356414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fc958d55f-sgjtg,Uid:9c5661c3-a6ae-4537-a0fb-159b28c4d8b2,Namespace:calico-apiserver,Attempt:1,}" Jul 12 00:09:33.066399 systemd[1]: run-netns-cni\x2df6b10089\x2d8142\x2d9f73\x2dd2aa\x2dbefa9e8abecc.mount: Deactivated successfully. 
Jul 12 00:09:33.247350 systemd-networkd[1562]: cali36c2931254e: Link UP Jul 12 00:09:33.248653 systemd-networkd[1562]: cali36c2931254e: Gained carrier Jul 12 00:09:33.272159 containerd[1748]: 2025-07-12 00:09:33.144 [INFO][4849] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 12 00:09:33.272159 containerd[1748]: 2025-07-12 00:09:33.163 [INFO][4849] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--n--047a586f92-k8s-calico--kube--controllers--64fd8cd9c6--dfxmq-eth0 calico-kube-controllers-64fd8cd9c6- calico-system 33b79413-179b-4bb3-828a-0fadd7c84383 961 0 2025-07-12 00:09:10 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:64fd8cd9c6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.4-n-047a586f92 calico-kube-controllers-64fd8cd9c6-dfxmq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali36c2931254e [] [] }} ContainerID="f80ede8a92f89ebf347ff22d0a19e138e4102d89dd7fbcbfa129ed2a6c4129b8" Namespace="calico-system" Pod="calico-kube-controllers-64fd8cd9c6-dfxmq" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--kube--controllers--64fd8cd9c6--dfxmq-" Jul 12 00:09:33.272159 containerd[1748]: 2025-07-12 00:09:33.163 [INFO][4849] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f80ede8a92f89ebf347ff22d0a19e138e4102d89dd7fbcbfa129ed2a6c4129b8" Namespace="calico-system" Pod="calico-kube-controllers-64fd8cd9c6-dfxmq" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--kube--controllers--64fd8cd9c6--dfxmq-eth0" Jul 12 00:09:33.272159 containerd[1748]: 2025-07-12 00:09:33.191 [INFO][4871] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="f80ede8a92f89ebf347ff22d0a19e138e4102d89dd7fbcbfa129ed2a6c4129b8" HandleID="k8s-pod-network.f80ede8a92f89ebf347ff22d0a19e138e4102d89dd7fbcbfa129ed2a6c4129b8" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--kube--controllers--64fd8cd9c6--dfxmq-eth0" Jul 12 00:09:33.272159 containerd[1748]: 2025-07-12 00:09:33.191 [INFO][4871] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f80ede8a92f89ebf347ff22d0a19e138e4102d89dd7fbcbfa129ed2a6c4129b8" HandleID="k8s-pod-network.f80ede8a92f89ebf347ff22d0a19e138e4102d89dd7fbcbfa129ed2a6c4129b8" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--kube--controllers--64fd8cd9c6--dfxmq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb0a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-n-047a586f92", "pod":"calico-kube-controllers-64fd8cd9c6-dfxmq", "timestamp":"2025-07-12 00:09:33.191445549 +0000 UTC"}, Hostname:"ci-4081.3.4-n-047a586f92", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:09:33.272159 containerd[1748]: 2025-07-12 00:09:33.191 [INFO][4871] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:33.272159 containerd[1748]: 2025-07-12 00:09:33.191 [INFO][4871] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:09:33.272159 containerd[1748]: 2025-07-12 00:09:33.191 [INFO][4871] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-n-047a586f92' Jul 12 00:09:33.272159 containerd[1748]: 2025-07-12 00:09:33.206 [INFO][4871] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f80ede8a92f89ebf347ff22d0a19e138e4102d89dd7fbcbfa129ed2a6c4129b8" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:33.272159 containerd[1748]: 2025-07-12 00:09:33.210 [INFO][4871] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:33.272159 containerd[1748]: 2025-07-12 00:09:33.215 [INFO][4871] ipam/ipam.go 511: Trying affinity for 192.168.35.192/26 host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:33.272159 containerd[1748]: 2025-07-12 00:09:33.218 [INFO][4871] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.192/26 host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:33.272159 containerd[1748]: 2025-07-12 00:09:33.220 [INFO][4871] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.192/26 host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:33.272159 containerd[1748]: 2025-07-12 00:09:33.220 [INFO][4871] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.35.192/26 handle="k8s-pod-network.f80ede8a92f89ebf347ff22d0a19e138e4102d89dd7fbcbfa129ed2a6c4129b8" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:33.272159 containerd[1748]: 2025-07-12 00:09:33.222 [INFO][4871] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f80ede8a92f89ebf347ff22d0a19e138e4102d89dd7fbcbfa129ed2a6c4129b8 Jul 12 00:09:33.272159 containerd[1748]: 2025-07-12 00:09:33.229 [INFO][4871] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.35.192/26 handle="k8s-pod-network.f80ede8a92f89ebf347ff22d0a19e138e4102d89dd7fbcbfa129ed2a6c4129b8" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:33.272159 containerd[1748]: 2025-07-12 00:09:33.238 [INFO][4871] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.35.194/26] block=192.168.35.192/26 handle="k8s-pod-network.f80ede8a92f89ebf347ff22d0a19e138e4102d89dd7fbcbfa129ed2a6c4129b8" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:33.272159 containerd[1748]: 2025-07-12 00:09:33.238 [INFO][4871] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.194/26] handle="k8s-pod-network.f80ede8a92f89ebf347ff22d0a19e138e4102d89dd7fbcbfa129ed2a6c4129b8" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:33.272159 containerd[1748]: 2025-07-12 00:09:33.239 [INFO][4871] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:33.272159 containerd[1748]: 2025-07-12 00:09:33.239 [INFO][4871] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.194/26] IPv6=[] ContainerID="f80ede8a92f89ebf347ff22d0a19e138e4102d89dd7fbcbfa129ed2a6c4129b8" HandleID="k8s-pod-network.f80ede8a92f89ebf347ff22d0a19e138e4102d89dd7fbcbfa129ed2a6c4129b8" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--kube--controllers--64fd8cd9c6--dfxmq-eth0" Jul 12 00:09:33.272762 containerd[1748]: 2025-07-12 00:09:33.243 [INFO][4849] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f80ede8a92f89ebf347ff22d0a19e138e4102d89dd7fbcbfa129ed2a6c4129b8" Namespace="calico-system" Pod="calico-kube-controllers-64fd8cd9c6-dfxmq" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--kube--controllers--64fd8cd9c6--dfxmq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--047a586f92-k8s-calico--kube--controllers--64fd8cd9c6--dfxmq-eth0", GenerateName:"calico-kube-controllers-64fd8cd9c6-", Namespace:"calico-system", SelfLink:"", UID:"33b79413-179b-4bb3-828a-0fadd7c84383", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64fd8cd9c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-047a586f92", ContainerID:"", Pod:"calico-kube-controllers-64fd8cd9c6-dfxmq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.35.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali36c2931254e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:33.272762 containerd[1748]: 2025-07-12 00:09:33.243 [INFO][4849] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.194/32] ContainerID="f80ede8a92f89ebf347ff22d0a19e138e4102d89dd7fbcbfa129ed2a6c4129b8" Namespace="calico-system" Pod="calico-kube-controllers-64fd8cd9c6-dfxmq" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--kube--controllers--64fd8cd9c6--dfxmq-eth0" Jul 12 00:09:33.272762 containerd[1748]: 2025-07-12 00:09:33.243 [INFO][4849] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali36c2931254e ContainerID="f80ede8a92f89ebf347ff22d0a19e138e4102d89dd7fbcbfa129ed2a6c4129b8" Namespace="calico-system" Pod="calico-kube-controllers-64fd8cd9c6-dfxmq" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--kube--controllers--64fd8cd9c6--dfxmq-eth0" Jul 12 00:09:33.272762 containerd[1748]: 2025-07-12 00:09:33.250 [INFO][4849] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="f80ede8a92f89ebf347ff22d0a19e138e4102d89dd7fbcbfa129ed2a6c4129b8" Namespace="calico-system" Pod="calico-kube-controllers-64fd8cd9c6-dfxmq" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--kube--controllers--64fd8cd9c6--dfxmq-eth0" Jul 12 00:09:33.272762 containerd[1748]: 2025-07-12 00:09:33.251 [INFO][4849] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f80ede8a92f89ebf347ff22d0a19e138e4102d89dd7fbcbfa129ed2a6c4129b8" Namespace="calico-system" Pod="calico-kube-controllers-64fd8cd9c6-dfxmq" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--kube--controllers--64fd8cd9c6--dfxmq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--047a586f92-k8s-calico--kube--controllers--64fd8cd9c6--dfxmq-eth0", GenerateName:"calico-kube-controllers-64fd8cd9c6-", Namespace:"calico-system", SelfLink:"", UID:"33b79413-179b-4bb3-828a-0fadd7c84383", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64fd8cd9c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-047a586f92", ContainerID:"f80ede8a92f89ebf347ff22d0a19e138e4102d89dd7fbcbfa129ed2a6c4129b8", Pod:"calico-kube-controllers-64fd8cd9c6-dfxmq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.35.194/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali36c2931254e", MAC:"4a:a8:7b:ce:c5:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:33.272762 containerd[1748]: 2025-07-12 00:09:33.269 [INFO][4849] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f80ede8a92f89ebf347ff22d0a19e138e4102d89dd7fbcbfa129ed2a6c4129b8" Namespace="calico-system" Pod="calico-kube-controllers-64fd8cd9c6-dfxmq" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--kube--controllers--64fd8cd9c6--dfxmq-eth0" Jul 12 00:09:33.291905 containerd[1748]: time="2025-07-12T00:09:33.291756195Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:09:33.291905 containerd[1748]: time="2025-07-12T00:09:33.291812195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:09:33.291905 containerd[1748]: time="2025-07-12T00:09:33.291827555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:33.292367 containerd[1748]: time="2025-07-12T00:09:33.291911595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:33.309421 systemd[1]: Started cri-containerd-f80ede8a92f89ebf347ff22d0a19e138e4102d89dd7fbcbfa129ed2a6c4129b8.scope - libcontainer container f80ede8a92f89ebf347ff22d0a19e138e4102d89dd7fbcbfa129ed2a6c4129b8. 
Jul 12 00:09:33.353007 containerd[1748]: time="2025-07-12T00:09:33.352950846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64fd8cd9c6-dfxmq,Uid:33b79413-179b-4bb3-828a-0fadd7c84383,Namespace:calico-system,Attempt:1,} returns sandbox id \"f80ede8a92f89ebf347ff22d0a19e138e4102d89dd7fbcbfa129ed2a6c4129b8\"" Jul 12 00:09:33.354910 systemd-networkd[1562]: calidefe61e03cd: Link UP Jul 12 00:09:33.358471 systemd-networkd[1562]: calidefe61e03cd: Gained carrier Jul 12 00:09:33.362224 containerd[1748]: time="2025-07-12T00:09:33.361466436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 12 00:09:33.375528 containerd[1748]: 2025-07-12 00:09:33.176 [INFO][4861] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 12 00:09:33.375528 containerd[1748]: 2025-07-12 00:09:33.190 [INFO][4861] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--sgjtg-eth0 calico-apiserver-7fc958d55f- calico-apiserver 9c5661c3-a6ae-4537-a0fb-159b28c4d8b2 962 0 2025-07-12 00:09:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7fc958d55f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.4-n-047a586f92 calico-apiserver-7fc958d55f-sgjtg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidefe61e03cd [] [] }} ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" Namespace="calico-apiserver" Pod="calico-apiserver-7fc958d55f-sgjtg" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--sgjtg-" Jul 12 00:09:33.375528 containerd[1748]: 2025-07-12 00:09:33.191 [INFO][4861] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" Namespace="calico-apiserver" Pod="calico-apiserver-7fc958d55f-sgjtg" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--sgjtg-eth0" Jul 12 00:09:33.375528 containerd[1748]: 2025-07-12 00:09:33.229 [INFO][4880] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" HandleID="k8s-pod-network.5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--sgjtg-eth0" Jul 12 00:09:33.375528 containerd[1748]: 2025-07-12 00:09:33.230 [INFO][4880] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" HandleID="k8s-pod-network.5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--sgjtg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3640), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.4-n-047a586f92", "pod":"calico-apiserver-7fc958d55f-sgjtg", "timestamp":"2025-07-12 00:09:33.229631426 +0000 UTC"}, Hostname:"ci-4081.3.4-n-047a586f92", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:09:33.375528 containerd[1748]: 2025-07-12 00:09:33.230 [INFO][4880] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:33.375528 containerd[1748]: 2025-07-12 00:09:33.239 [INFO][4880] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:09:33.375528 containerd[1748]: 2025-07-12 00:09:33.239 [INFO][4880] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-n-047a586f92' Jul 12 00:09:33.375528 containerd[1748]: 2025-07-12 00:09:33.307 [INFO][4880] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:33.375528 containerd[1748]: 2025-07-12 00:09:33.313 [INFO][4880] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:33.375528 containerd[1748]: 2025-07-12 00:09:33.318 [INFO][4880] ipam/ipam.go 511: Trying affinity for 192.168.35.192/26 host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:33.375528 containerd[1748]: 2025-07-12 00:09:33.320 [INFO][4880] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.192/26 host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:33.375528 containerd[1748]: 2025-07-12 00:09:33.323 [INFO][4880] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.192/26 host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:33.375528 containerd[1748]: 2025-07-12 00:09:33.323 [INFO][4880] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.35.192/26 handle="k8s-pod-network.5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:33.375528 containerd[1748]: 2025-07-12 00:09:33.326 [INFO][4880] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569 Jul 12 00:09:33.375528 containerd[1748]: 2025-07-12 00:09:33.334 [INFO][4880] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.35.192/26 handle="k8s-pod-network.5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:33.375528 containerd[1748]: 2025-07-12 00:09:33.346 [INFO][4880] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.35.195/26] block=192.168.35.192/26 handle="k8s-pod-network.5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:33.375528 containerd[1748]: 2025-07-12 00:09:33.347 [INFO][4880] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.195/26] handle="k8s-pod-network.5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:33.375528 containerd[1748]: 2025-07-12 00:09:33.347 [INFO][4880] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:33.375528 containerd[1748]: 2025-07-12 00:09:33.347 [INFO][4880] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.195/26] IPv6=[] ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" HandleID="k8s-pod-network.5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--sgjtg-eth0" Jul 12 00:09:33.376588 containerd[1748]: 2025-07-12 00:09:33.351 [INFO][4861] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" Namespace="calico-apiserver" Pod="calico-apiserver-7fc958d55f-sgjtg" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--sgjtg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--sgjtg-eth0", GenerateName:"calico-apiserver-7fc958d55f-", Namespace:"calico-apiserver", SelfLink:"", UID:"9c5661c3-a6ae-4537-a0fb-159b28c4d8b2", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"7fc958d55f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-047a586f92", ContainerID:"", Pod:"calico-apiserver-7fc958d55f-sgjtg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidefe61e03cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:33.376588 containerd[1748]: 2025-07-12 00:09:33.351 [INFO][4861] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.195/32] ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" Namespace="calico-apiserver" Pod="calico-apiserver-7fc958d55f-sgjtg" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--sgjtg-eth0" Jul 12 00:09:33.376588 containerd[1748]: 2025-07-12 00:09:33.351 [INFO][4861] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidefe61e03cd ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" Namespace="calico-apiserver" Pod="calico-apiserver-7fc958d55f-sgjtg" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--sgjtg-eth0" Jul 12 00:09:33.376588 containerd[1748]: 2025-07-12 00:09:33.355 [INFO][4861] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" Namespace="calico-apiserver" Pod="calico-apiserver-7fc958d55f-sgjtg" 
WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--sgjtg-eth0" Jul 12 00:09:33.376588 containerd[1748]: 2025-07-12 00:09:33.355 [INFO][4861] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" Namespace="calico-apiserver" Pod="calico-apiserver-7fc958d55f-sgjtg" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--sgjtg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--sgjtg-eth0", GenerateName:"calico-apiserver-7fc958d55f-", Namespace:"calico-apiserver", SelfLink:"", UID:"9c5661c3-a6ae-4537-a0fb-159b28c4d8b2", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fc958d55f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-047a586f92", ContainerID:"5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569", Pod:"calico-apiserver-7fc958d55f-sgjtg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidefe61e03cd", MAC:"7e:46:c7:5d:89:d8", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:33.376588 containerd[1748]: 2025-07-12 00:09:33.374 [INFO][4861] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" Namespace="calico-apiserver" Pod="calico-apiserver-7fc958d55f-sgjtg" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--sgjtg-eth0" Jul 12 00:09:33.398652 containerd[1748]: time="2025-07-12T00:09:33.398542114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:09:33.398652 containerd[1748]: time="2025-07-12T00:09:33.398595674Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:09:33.398652 containerd[1748]: time="2025-07-12T00:09:33.398620074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:33.399086 containerd[1748]: time="2025-07-12T00:09:33.398747713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:33.415457 systemd[1]: Started cri-containerd-5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569.scope - libcontainer container 5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569. 
Jul 12 00:09:33.442980 containerd[1748]: time="2025-07-12T00:09:33.442923703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fc958d55f-sgjtg,Uid:9c5661c3-a6ae-4537-a0fb-159b28c4d8b2,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569\"" Jul 12 00:09:33.930045 containerd[1748]: time="2025-07-12T00:09:33.929988908Z" level=info msg="StopPodSandbox for \"c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045\"" Jul 12 00:09:34.009186 containerd[1748]: 2025-07-12 00:09:33.974 [INFO][5008] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" Jul 12 00:09:34.009186 containerd[1748]: 2025-07-12 00:09:33.975 [INFO][5008] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" iface="eth0" netns="/var/run/netns/cni-87fcda95-d508-29e9-1d75-890e82bf5f4e" Jul 12 00:09:34.009186 containerd[1748]: 2025-07-12 00:09:33.975 [INFO][5008] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" iface="eth0" netns="/var/run/netns/cni-87fcda95-d508-29e9-1d75-890e82bf5f4e" Jul 12 00:09:34.009186 containerd[1748]: 2025-07-12 00:09:33.976 [INFO][5008] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" iface="eth0" netns="/var/run/netns/cni-87fcda95-d508-29e9-1d75-890e82bf5f4e" Jul 12 00:09:34.009186 containerd[1748]: 2025-07-12 00:09:33.976 [INFO][5008] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" Jul 12 00:09:34.009186 containerd[1748]: 2025-07-12 00:09:33.977 [INFO][5008] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" Jul 12 00:09:34.009186 containerd[1748]: 2025-07-12 00:09:33.996 [INFO][5015] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" HandleID="k8s-pod-network.c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--c2rbd-eth0" Jul 12 00:09:34.009186 containerd[1748]: 2025-07-12 00:09:33.996 [INFO][5015] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:34.009186 containerd[1748]: 2025-07-12 00:09:33.996 [INFO][5015] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:34.009186 containerd[1748]: 2025-07-12 00:09:34.005 [WARNING][5015] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" HandleID="k8s-pod-network.c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--c2rbd-eth0" Jul 12 00:09:34.009186 containerd[1748]: 2025-07-12 00:09:34.005 [INFO][5015] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" HandleID="k8s-pod-network.c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--c2rbd-eth0" Jul 12 00:09:34.009186 containerd[1748]: 2025-07-12 00:09:34.006 [INFO][5015] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:34.009186 containerd[1748]: 2025-07-12 00:09:34.007 [INFO][5008] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" Jul 12 00:09:34.010655 containerd[1748]: time="2025-07-12T00:09:34.009300538Z" level=info msg="TearDown network for sandbox \"c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045\" successfully" Jul 12 00:09:34.010655 containerd[1748]: time="2025-07-12T00:09:34.009325818Z" level=info msg="StopPodSandbox for \"c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045\" returns successfully" Jul 12 00:09:34.010655 containerd[1748]: time="2025-07-12T00:09:34.010041297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79f66ccc75-c2rbd,Uid:c64c54d7-e2ea-42e7-9d83-27f006d7ff1f,Namespace:calico-apiserver,Attempt:1,}" Jul 12 00:09:34.050617 systemd[1]: run-netns-cni\x2d87fcda95\x2dd508\x2d29e9\x2d1d75\x2d890e82bf5f4e.mount: Deactivated successfully. 
Jul 12 00:09:34.181221 systemd-networkd[1562]: cali45fb9f88c66: Link UP Jul 12 00:09:34.181957 systemd-networkd[1562]: cali45fb9f88c66: Gained carrier Jul 12 00:09:34.200763 containerd[1748]: 2025-07-12 00:09:34.093 [INFO][5024] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 12 00:09:34.200763 containerd[1748]: 2025-07-12 00:09:34.106 [INFO][5024] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--c2rbd-eth0 calico-apiserver-79f66ccc75- calico-apiserver c64c54d7-e2ea-42e7-9d83-27f006d7ff1f 974 0 2025-07-12 00:09:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79f66ccc75 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.4-n-047a586f92 calico-apiserver-79f66ccc75-c2rbd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali45fb9f88c66 [] [] }} ContainerID="cb95d0b7138ddaf103db71f0d9d80199dcf1f41091d2cf62b151d6ed7f6d4bac" Namespace="calico-apiserver" Pod="calico-apiserver-79f66ccc75-c2rbd" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--c2rbd-" Jul 12 00:09:34.200763 containerd[1748]: 2025-07-12 00:09:34.106 [INFO][5024] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cb95d0b7138ddaf103db71f0d9d80199dcf1f41091d2cf62b151d6ed7f6d4bac" Namespace="calico-apiserver" Pod="calico-apiserver-79f66ccc75-c2rbd" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--c2rbd-eth0" Jul 12 00:09:34.200763 containerd[1748]: 2025-07-12 00:09:34.128 [INFO][5035] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cb95d0b7138ddaf103db71f0d9d80199dcf1f41091d2cf62b151d6ed7f6d4bac" 
HandleID="k8s-pod-network.cb95d0b7138ddaf103db71f0d9d80199dcf1f41091d2cf62b151d6ed7f6d4bac" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--c2rbd-eth0" Jul 12 00:09:34.200763 containerd[1748]: 2025-07-12 00:09:34.128 [INFO][5035] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cb95d0b7138ddaf103db71f0d9d80199dcf1f41091d2cf62b151d6ed7f6d4bac" HandleID="k8s-pod-network.cb95d0b7138ddaf103db71f0d9d80199dcf1f41091d2cf62b151d6ed7f6d4bac" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--c2rbd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024af50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.4-n-047a586f92", "pod":"calico-apiserver-79f66ccc75-c2rbd", "timestamp":"2025-07-12 00:09:34.128451922 +0000 UTC"}, Hostname:"ci-4081.3.4-n-047a586f92", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:09:34.200763 containerd[1748]: 2025-07-12 00:09:34.128 [INFO][5035] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:34.200763 containerd[1748]: 2025-07-12 00:09:34.128 [INFO][5035] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:09:34.200763 containerd[1748]: 2025-07-12 00:09:34.128 [INFO][5035] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-n-047a586f92' Jul 12 00:09:34.200763 containerd[1748]: 2025-07-12 00:09:34.138 [INFO][5035] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cb95d0b7138ddaf103db71f0d9d80199dcf1f41091d2cf62b151d6ed7f6d4bac" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:34.200763 containerd[1748]: 2025-07-12 00:09:34.143 [INFO][5035] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:34.200763 containerd[1748]: 2025-07-12 00:09:34.151 [INFO][5035] ipam/ipam.go 511: Trying affinity for 192.168.35.192/26 host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:34.200763 containerd[1748]: 2025-07-12 00:09:34.153 [INFO][5035] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.192/26 host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:34.200763 containerd[1748]: 2025-07-12 00:09:34.156 [INFO][5035] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.192/26 host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:34.200763 containerd[1748]: 2025-07-12 00:09:34.156 [INFO][5035] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.35.192/26 handle="k8s-pod-network.cb95d0b7138ddaf103db71f0d9d80199dcf1f41091d2cf62b151d6ed7f6d4bac" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:34.200763 containerd[1748]: 2025-07-12 00:09:34.158 [INFO][5035] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cb95d0b7138ddaf103db71f0d9d80199dcf1f41091d2cf62b151d6ed7f6d4bac Jul 12 00:09:34.200763 containerd[1748]: 2025-07-12 00:09:34.163 [INFO][5035] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.35.192/26 handle="k8s-pod-network.cb95d0b7138ddaf103db71f0d9d80199dcf1f41091d2cf62b151d6ed7f6d4bac" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:34.200763 containerd[1748]: 2025-07-12 00:09:34.172 [INFO][5035] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.35.196/26] block=192.168.35.192/26 handle="k8s-pod-network.cb95d0b7138ddaf103db71f0d9d80199dcf1f41091d2cf62b151d6ed7f6d4bac" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:34.200763 containerd[1748]: 2025-07-12 00:09:34.173 [INFO][5035] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.196/26] handle="k8s-pod-network.cb95d0b7138ddaf103db71f0d9d80199dcf1f41091d2cf62b151d6ed7f6d4bac" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:34.200763 containerd[1748]: 2025-07-12 00:09:34.173 [INFO][5035] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:34.200763 containerd[1748]: 2025-07-12 00:09:34.173 [INFO][5035] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.196/26] IPv6=[] ContainerID="cb95d0b7138ddaf103db71f0d9d80199dcf1f41091d2cf62b151d6ed7f6d4bac" HandleID="k8s-pod-network.cb95d0b7138ddaf103db71f0d9d80199dcf1f41091d2cf62b151d6ed7f6d4bac" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--c2rbd-eth0" Jul 12 00:09:34.201523 containerd[1748]: 2025-07-12 00:09:34.175 [INFO][5024] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cb95d0b7138ddaf103db71f0d9d80199dcf1f41091d2cf62b151d6ed7f6d4bac" Namespace="calico-apiserver" Pod="calico-apiserver-79f66ccc75-c2rbd" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--c2rbd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--c2rbd-eth0", GenerateName:"calico-apiserver-79f66ccc75-", Namespace:"calico-apiserver", SelfLink:"", UID:"c64c54d7-e2ea-42e7-9d83-27f006d7ff1f", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"79f66ccc75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-047a586f92", ContainerID:"", Pod:"calico-apiserver-79f66ccc75-c2rbd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali45fb9f88c66", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:34.201523 containerd[1748]: 2025-07-12 00:09:34.175 [INFO][5024] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.196/32] ContainerID="cb95d0b7138ddaf103db71f0d9d80199dcf1f41091d2cf62b151d6ed7f6d4bac" Namespace="calico-apiserver" Pod="calico-apiserver-79f66ccc75-c2rbd" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--c2rbd-eth0" Jul 12 00:09:34.201523 containerd[1748]: 2025-07-12 00:09:34.175 [INFO][5024] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali45fb9f88c66 ContainerID="cb95d0b7138ddaf103db71f0d9d80199dcf1f41091d2cf62b151d6ed7f6d4bac" Namespace="calico-apiserver" Pod="calico-apiserver-79f66ccc75-c2rbd" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--c2rbd-eth0" Jul 12 00:09:34.201523 containerd[1748]: 2025-07-12 00:09:34.183 [INFO][5024] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cb95d0b7138ddaf103db71f0d9d80199dcf1f41091d2cf62b151d6ed7f6d4bac" Namespace="calico-apiserver" Pod="calico-apiserver-79f66ccc75-c2rbd" 
WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--c2rbd-eth0" Jul 12 00:09:34.201523 containerd[1748]: 2025-07-12 00:09:34.183 [INFO][5024] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cb95d0b7138ddaf103db71f0d9d80199dcf1f41091d2cf62b151d6ed7f6d4bac" Namespace="calico-apiserver" Pod="calico-apiserver-79f66ccc75-c2rbd" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--c2rbd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--c2rbd-eth0", GenerateName:"calico-apiserver-79f66ccc75-", Namespace:"calico-apiserver", SelfLink:"", UID:"c64c54d7-e2ea-42e7-9d83-27f006d7ff1f", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79f66ccc75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-047a586f92", ContainerID:"cb95d0b7138ddaf103db71f0d9d80199dcf1f41091d2cf62b151d6ed7f6d4bac", Pod:"calico-apiserver-79f66ccc75-c2rbd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali45fb9f88c66", MAC:"76:ea:3a:e8:7b:4b", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:34.201523 containerd[1748]: 2025-07-12 00:09:34.199 [INFO][5024] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cb95d0b7138ddaf103db71f0d9d80199dcf1f41091d2cf62b151d6ed7f6d4bac" Namespace="calico-apiserver" Pod="calico-apiserver-79f66ccc75-c2rbd" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--c2rbd-eth0" Jul 12 00:09:34.228375 containerd[1748]: time="2025-07-12T00:09:34.228243449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:09:34.228468 containerd[1748]: time="2025-07-12T00:09:34.228392768Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:09:34.228468 containerd[1748]: time="2025-07-12T00:09:34.228420808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:34.228671 containerd[1748]: time="2025-07-12T00:09:34.228610808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:34.251419 systemd[1]: Started cri-containerd-cb95d0b7138ddaf103db71f0d9d80199dcf1f41091d2cf62b151d6ed7f6d4bac.scope - libcontainer container cb95d0b7138ddaf103db71f0d9d80199dcf1f41091d2cf62b151d6ed7f6d4bac. 
Jul 12 00:09:34.282088 containerd[1748]: time="2025-07-12T00:09:34.282052067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79f66ccc75-c2rbd,Uid:c64c54d7-e2ea-42e7-9d83-27f006d7ff1f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"cb95d0b7138ddaf103db71f0d9d80199dcf1f41091d2cf62b151d6ed7f6d4bac\"" Jul 12 00:09:34.656141 kubelet[3127]: I0712 00:09:34.656006 3127 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:09:34.794462 systemd-networkd[1562]: calidefe61e03cd: Gained IPv6LL Jul 12 00:09:34.932288 containerd[1748]: time="2025-07-12T00:09:34.931888047Z" level=info msg="StopPodSandbox for \"5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af\"" Jul 12 00:09:35.027738 containerd[1748]: 2025-07-12 00:09:34.987 [INFO][5131] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" Jul 12 00:09:35.027738 containerd[1748]: 2025-07-12 00:09:34.988 [INFO][5131] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" iface="eth0" netns="/var/run/netns/cni-301f8ede-e0a5-0a60-debc-b26161c78b2c" Jul 12 00:09:35.027738 containerd[1748]: 2025-07-12 00:09:34.989 [INFO][5131] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" iface="eth0" netns="/var/run/netns/cni-301f8ede-e0a5-0a60-debc-b26161c78b2c" Jul 12 00:09:35.027738 containerd[1748]: 2025-07-12 00:09:34.989 [INFO][5131] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" iface="eth0" netns="/var/run/netns/cni-301f8ede-e0a5-0a60-debc-b26161c78b2c" Jul 12 00:09:35.027738 containerd[1748]: 2025-07-12 00:09:34.989 [INFO][5131] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" Jul 12 00:09:35.027738 containerd[1748]: 2025-07-12 00:09:34.989 [INFO][5131] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" Jul 12 00:09:35.027738 containerd[1748]: 2025-07-12 00:09:35.013 [INFO][5139] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" HandleID="k8s-pod-network.5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" Workload="ci--4081.3.4--n--047a586f92-k8s-csi--node--driver--26mjl-eth0" Jul 12 00:09:35.027738 containerd[1748]: 2025-07-12 00:09:35.014 [INFO][5139] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:35.027738 containerd[1748]: 2025-07-12 00:09:35.014 [INFO][5139] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:35.027738 containerd[1748]: 2025-07-12 00:09:35.022 [WARNING][5139] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" HandleID="k8s-pod-network.5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" Workload="ci--4081.3.4--n--047a586f92-k8s-csi--node--driver--26mjl-eth0" Jul 12 00:09:35.027738 containerd[1748]: 2025-07-12 00:09:35.023 [INFO][5139] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" HandleID="k8s-pod-network.5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" Workload="ci--4081.3.4--n--047a586f92-k8s-csi--node--driver--26mjl-eth0" Jul 12 00:09:35.027738 containerd[1748]: 2025-07-12 00:09:35.024 [INFO][5139] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:35.027738 containerd[1748]: 2025-07-12 00:09:35.025 [INFO][5131] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" Jul 12 00:09:35.027738 containerd[1748]: time="2025-07-12T00:09:35.027405738Z" level=info msg="TearDown network for sandbox \"5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af\" successfully" Jul 12 00:09:35.027738 containerd[1748]: time="2025-07-12T00:09:35.027472298Z" level=info msg="StopPodSandbox for \"5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af\" returns successfully" Jul 12 00:09:35.031972 containerd[1748]: time="2025-07-12T00:09:35.031933173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-26mjl,Uid:0465de75-2781-421a-b1c8-807d08b402b9,Namespace:calico-system,Attempt:1,}" Jul 12 00:09:35.048693 systemd[1]: run-netns-cni\x2d301f8ede\x2de0a5\x2d0a60\x2ddebc\x2db26161c78b2c.mount: Deactivated successfully. 
Jul 12 00:09:35.050415 systemd-networkd[1562]: cali36c2931254e: Gained IPv6LL Jul 12 00:09:35.221004 systemd-networkd[1562]: cali61478a4999b: Link UP Jul 12 00:09:35.221889 systemd-networkd[1562]: cali61478a4999b: Gained carrier Jul 12 00:09:35.241627 containerd[1748]: 2025-07-12 00:09:35.135 [INFO][5146] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 12 00:09:35.241627 containerd[1748]: 2025-07-12 00:09:35.150 [INFO][5146] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--n--047a586f92-k8s-csi--node--driver--26mjl-eth0 csi-node-driver- calico-system 0465de75-2781-421a-b1c8-807d08b402b9 989 0 2025-07-12 00:09:10 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.4-n-047a586f92 csi-node-driver-26mjl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali61478a4999b [] [] }} ContainerID="b59abbe8b833b0853e277023384fd63a08e03a185c0fd609e3293f9c501b3eb0" Namespace="calico-system" Pod="csi-node-driver-26mjl" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-csi--node--driver--26mjl-" Jul 12 00:09:35.241627 containerd[1748]: 2025-07-12 00:09:35.150 [INFO][5146] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b59abbe8b833b0853e277023384fd63a08e03a185c0fd609e3293f9c501b3eb0" Namespace="calico-system" Pod="csi-node-driver-26mjl" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-csi--node--driver--26mjl-eth0" Jul 12 00:09:35.241627 containerd[1748]: 2025-07-12 00:09:35.175 [INFO][5160] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b59abbe8b833b0853e277023384fd63a08e03a185c0fd609e3293f9c501b3eb0" 
HandleID="k8s-pod-network.b59abbe8b833b0853e277023384fd63a08e03a185c0fd609e3293f9c501b3eb0" Workload="ci--4081.3.4--n--047a586f92-k8s-csi--node--driver--26mjl-eth0" Jul 12 00:09:35.241627 containerd[1748]: 2025-07-12 00:09:35.175 [INFO][5160] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b59abbe8b833b0853e277023384fd63a08e03a185c0fd609e3293f9c501b3eb0" HandleID="k8s-pod-network.b59abbe8b833b0853e277023384fd63a08e03a185c0fd609e3293f9c501b3eb0" Workload="ci--4081.3.4--n--047a586f92-k8s-csi--node--driver--26mjl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024aff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-n-047a586f92", "pod":"csi-node-driver-26mjl", "timestamp":"2025-07-12 00:09:35.17541621 +0000 UTC"}, Hostname:"ci-4081.3.4-n-047a586f92", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:09:35.241627 containerd[1748]: 2025-07-12 00:09:35.175 [INFO][5160] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:35.241627 containerd[1748]: 2025-07-12 00:09:35.175 [INFO][5160] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:09:35.241627 containerd[1748]: 2025-07-12 00:09:35.175 [INFO][5160] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-n-047a586f92' Jul 12 00:09:35.241627 containerd[1748]: 2025-07-12 00:09:35.184 [INFO][5160] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b59abbe8b833b0853e277023384fd63a08e03a185c0fd609e3293f9c501b3eb0" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:35.241627 containerd[1748]: 2025-07-12 00:09:35.188 [INFO][5160] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:35.241627 containerd[1748]: 2025-07-12 00:09:35.193 [INFO][5160] ipam/ipam.go 511: Trying affinity for 192.168.35.192/26 host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:35.241627 containerd[1748]: 2025-07-12 00:09:35.194 [INFO][5160] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.192/26 host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:35.241627 containerd[1748]: 2025-07-12 00:09:35.196 [INFO][5160] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.192/26 host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:35.241627 containerd[1748]: 2025-07-12 00:09:35.196 [INFO][5160] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.35.192/26 handle="k8s-pod-network.b59abbe8b833b0853e277023384fd63a08e03a185c0fd609e3293f9c501b3eb0" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:35.241627 containerd[1748]: 2025-07-12 00:09:35.198 [INFO][5160] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b59abbe8b833b0853e277023384fd63a08e03a185c0fd609e3293f9c501b3eb0 Jul 12 00:09:35.241627 containerd[1748]: 2025-07-12 00:09:35.206 [INFO][5160] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.35.192/26 handle="k8s-pod-network.b59abbe8b833b0853e277023384fd63a08e03a185c0fd609e3293f9c501b3eb0" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:35.241627 containerd[1748]: 2025-07-12 00:09:35.216 [INFO][5160] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.35.197/26] block=192.168.35.192/26 handle="k8s-pod-network.b59abbe8b833b0853e277023384fd63a08e03a185c0fd609e3293f9c501b3eb0" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:35.241627 containerd[1748]: 2025-07-12 00:09:35.216 [INFO][5160] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.197/26] handle="k8s-pod-network.b59abbe8b833b0853e277023384fd63a08e03a185c0fd609e3293f9c501b3eb0" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:35.241627 containerd[1748]: 2025-07-12 00:09:35.216 [INFO][5160] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:35.241627 containerd[1748]: 2025-07-12 00:09:35.216 [INFO][5160] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.197/26] IPv6=[] ContainerID="b59abbe8b833b0853e277023384fd63a08e03a185c0fd609e3293f9c501b3eb0" HandleID="k8s-pod-network.b59abbe8b833b0853e277023384fd63a08e03a185c0fd609e3293f9c501b3eb0" Workload="ci--4081.3.4--n--047a586f92-k8s-csi--node--driver--26mjl-eth0" Jul 12 00:09:35.243419 containerd[1748]: 2025-07-12 00:09:35.218 [INFO][5146] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b59abbe8b833b0853e277023384fd63a08e03a185c0fd609e3293f9c501b3eb0" Namespace="calico-system" Pod="csi-node-driver-26mjl" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-csi--node--driver--26mjl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--047a586f92-k8s-csi--node--driver--26mjl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0465de75-2781-421a-b1c8-807d08b402b9", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-047a586f92", ContainerID:"", Pod:"csi-node-driver-26mjl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.35.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali61478a4999b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:35.243419 containerd[1748]: 2025-07-12 00:09:35.218 [INFO][5146] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.197/32] ContainerID="b59abbe8b833b0853e277023384fd63a08e03a185c0fd609e3293f9c501b3eb0" Namespace="calico-system" Pod="csi-node-driver-26mjl" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-csi--node--driver--26mjl-eth0" Jul 12 00:09:35.243419 containerd[1748]: 2025-07-12 00:09:35.218 [INFO][5146] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali61478a4999b ContainerID="b59abbe8b833b0853e277023384fd63a08e03a185c0fd609e3293f9c501b3eb0" Namespace="calico-system" Pod="csi-node-driver-26mjl" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-csi--node--driver--26mjl-eth0" Jul 12 00:09:35.243419 containerd[1748]: 2025-07-12 00:09:35.222 [INFO][5146] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b59abbe8b833b0853e277023384fd63a08e03a185c0fd609e3293f9c501b3eb0" Namespace="calico-system" Pod="csi-node-driver-26mjl" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-csi--node--driver--26mjl-eth0" Jul 12 00:09:35.243419 
containerd[1748]: 2025-07-12 00:09:35.222 [INFO][5146] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b59abbe8b833b0853e277023384fd63a08e03a185c0fd609e3293f9c501b3eb0" Namespace="calico-system" Pod="csi-node-driver-26mjl" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-csi--node--driver--26mjl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--047a586f92-k8s-csi--node--driver--26mjl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0465de75-2781-421a-b1c8-807d08b402b9", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-047a586f92", ContainerID:"b59abbe8b833b0853e277023384fd63a08e03a185c0fd609e3293f9c501b3eb0", Pod:"csi-node-driver-26mjl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.35.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali61478a4999b", MAC:"36:94:21:1e:9a:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:35.243419 containerd[1748]: 2025-07-12 
00:09:35.238 [INFO][5146] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b59abbe8b833b0853e277023384fd63a08e03a185c0fd609e3293f9c501b3eb0" Namespace="calico-system" Pod="csi-node-driver-26mjl" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-csi--node--driver--26mjl-eth0" Jul 12 00:09:35.263031 containerd[1748]: time="2025-07-12T00:09:35.262392631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:09:35.263031 containerd[1748]: time="2025-07-12T00:09:35.262450431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:09:35.263031 containerd[1748]: time="2025-07-12T00:09:35.262464511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:35.263031 containerd[1748]: time="2025-07-12T00:09:35.262538551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:35.284424 systemd[1]: Started cri-containerd-b59abbe8b833b0853e277023384fd63a08e03a185c0fd609e3293f9c501b3eb0.scope - libcontainer container b59abbe8b833b0853e277023384fd63a08e03a185c0fd609e3293f9c501b3eb0. 
Jul 12 00:09:35.305753 containerd[1748]: time="2025-07-12T00:09:35.305703261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-26mjl,Uid:0465de75-2781-421a-b1c8-807d08b402b9,Namespace:calico-system,Attempt:1,} returns sandbox id \"b59abbe8b833b0853e277023384fd63a08e03a185c0fd609e3293f9c501b3eb0\"" Jul 12 00:09:35.555338 kernel: bpftool[5230]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 12 00:09:35.690583 systemd-networkd[1562]: cali45fb9f88c66: Gained IPv6LL Jul 12 00:09:35.931924 containerd[1748]: time="2025-07-12T00:09:35.931867828Z" level=info msg="StopPodSandbox for \"cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace\"" Jul 12 00:09:35.932486 containerd[1748]: time="2025-07-12T00:09:35.932264908Z" level=info msg="StopPodSandbox for \"9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2\"" Jul 12 00:09:35.933556 containerd[1748]: time="2025-07-12T00:09:35.933336227Z" level=info msg="StopPodSandbox for \"a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2\"" Jul 12 00:09:36.002975 systemd-networkd[1562]: vxlan.calico: Link UP Jul 12 00:09:36.003574 systemd-networkd[1562]: vxlan.calico: Gained carrier Jul 12 00:09:36.168128 containerd[1748]: 2025-07-12 00:09:36.036 [INFO][5270] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" Jul 12 00:09:36.168128 containerd[1748]: 2025-07-12 00:09:36.039 [INFO][5270] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" iface="eth0" netns="/var/run/netns/cni-4b7d76e1-6007-b449-be3a-b6633dc647b5" Jul 12 00:09:36.168128 containerd[1748]: 2025-07-12 00:09:36.039 [INFO][5270] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" iface="eth0" netns="/var/run/netns/cni-4b7d76e1-6007-b449-be3a-b6633dc647b5" Jul 12 00:09:36.168128 containerd[1748]: 2025-07-12 00:09:36.041 [INFO][5270] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" iface="eth0" netns="/var/run/netns/cni-4b7d76e1-6007-b449-be3a-b6633dc647b5" Jul 12 00:09:36.168128 containerd[1748]: 2025-07-12 00:09:36.042 [INFO][5270] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" Jul 12 00:09:36.168128 containerd[1748]: 2025-07-12 00:09:36.042 [INFO][5270] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" Jul 12 00:09:36.168128 containerd[1748]: 2025-07-12 00:09:36.144 [INFO][5312] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" HandleID="k8s-pod-network.9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" Workload="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--6lhhr-eth0" Jul 12 00:09:36.168128 containerd[1748]: 2025-07-12 00:09:36.144 [INFO][5312] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:36.168128 containerd[1748]: 2025-07-12 00:09:36.144 [INFO][5312] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:36.168128 containerd[1748]: 2025-07-12 00:09:36.155 [WARNING][5312] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" HandleID="k8s-pod-network.9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" Workload="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--6lhhr-eth0" Jul 12 00:09:36.168128 containerd[1748]: 2025-07-12 00:09:36.155 [INFO][5312] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" HandleID="k8s-pod-network.9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" Workload="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--6lhhr-eth0" Jul 12 00:09:36.168128 containerd[1748]: 2025-07-12 00:09:36.158 [INFO][5312] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:36.168128 containerd[1748]: 2025-07-12 00:09:36.162 [INFO][5270] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" Jul 12 00:09:36.171084 containerd[1748]: time="2025-07-12T00:09:36.170363197Z" level=info msg="TearDown network for sandbox \"9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2\" successfully" Jul 12 00:09:36.173285 containerd[1748]: time="2025-07-12T00:09:36.170388957Z" level=info msg="StopPodSandbox for \"9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2\" returns successfully" Jul 12 00:09:36.175369 systemd[1]: run-netns-cni\x2d4b7d76e1\x2d6007\x2db449\x2dbe3a\x2db6633dc647b5.mount: Deactivated successfully. 
Jul 12 00:09:36.179821 containerd[1748]: time="2025-07-12T00:09:36.179789626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6lhhr,Uid:664d0b79-40b5-4d9c-8498-5a2e2d35a983,Namespace:kube-system,Attempt:1,}" Jul 12 00:09:36.205495 containerd[1748]: 2025-07-12 00:09:36.026 [INFO][5278] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" Jul 12 00:09:36.205495 containerd[1748]: 2025-07-12 00:09:36.028 [INFO][5278] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" iface="eth0" netns="/var/run/netns/cni-f27fa67d-3510-37a3-e0fd-64277263f9b6" Jul 12 00:09:36.205495 containerd[1748]: 2025-07-12 00:09:36.028 [INFO][5278] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" iface="eth0" netns="/var/run/netns/cni-f27fa67d-3510-37a3-e0fd-64277263f9b6" Jul 12 00:09:36.205495 containerd[1748]: 2025-07-12 00:09:36.029 [INFO][5278] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" iface="eth0" netns="/var/run/netns/cni-f27fa67d-3510-37a3-e0fd-64277263f9b6" Jul 12 00:09:36.205495 containerd[1748]: 2025-07-12 00:09:36.029 [INFO][5278] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" Jul 12 00:09:36.205495 containerd[1748]: 2025-07-12 00:09:36.029 [INFO][5278] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" Jul 12 00:09:36.205495 containerd[1748]: 2025-07-12 00:09:36.149 [INFO][5302] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" HandleID="k8s-pod-network.a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" Workload="ci--4081.3.4--n--047a586f92-k8s-goldmane--768f4c5c69--4sphm-eth0" Jul 12 00:09:36.205495 containerd[1748]: 2025-07-12 00:09:36.149 [INFO][5302] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:36.205495 containerd[1748]: 2025-07-12 00:09:36.158 [INFO][5302] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:36.205495 containerd[1748]: 2025-07-12 00:09:36.181 [WARNING][5302] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" HandleID="k8s-pod-network.a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" Workload="ci--4081.3.4--n--047a586f92-k8s-goldmane--768f4c5c69--4sphm-eth0" Jul 12 00:09:36.205495 containerd[1748]: 2025-07-12 00:09:36.181 [INFO][5302] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" HandleID="k8s-pod-network.a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" Workload="ci--4081.3.4--n--047a586f92-k8s-goldmane--768f4c5c69--4sphm-eth0" Jul 12 00:09:36.205495 containerd[1748]: 2025-07-12 00:09:36.189 [INFO][5302] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:36.205495 containerd[1748]: 2025-07-12 00:09:36.201 [INFO][5278] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" Jul 12 00:09:36.210610 containerd[1748]: time="2025-07-12T00:09:36.209331312Z" level=info msg="TearDown network for sandbox \"a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2\" successfully" Jul 12 00:09:36.210610 containerd[1748]: time="2025-07-12T00:09:36.209362912Z" level=info msg="StopPodSandbox for \"a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2\" returns successfully" Jul 12 00:09:36.210104 systemd[1]: run-netns-cni\x2df27fa67d\x2d3510\x2d37a3\x2de0fd\x2d64277263f9b6.mount: Deactivated successfully. 
Jul 12 00:09:36.214045 containerd[1748]: time="2025-07-12T00:09:36.214008547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-4sphm,Uid:bd421362-01fe-4241-bb04-72d4085cf927,Namespace:calico-system,Attempt:1,}" Jul 12 00:09:36.248615 containerd[1748]: 2025-07-12 00:09:36.038 [INFO][5269] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" Jul 12 00:09:36.248615 containerd[1748]: 2025-07-12 00:09:36.039 [INFO][5269] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" iface="eth0" netns="/var/run/netns/cni-4b615ab2-dc24-43e2-1bf0-b6ed0683dddd" Jul 12 00:09:36.248615 containerd[1748]: 2025-07-12 00:09:36.040 [INFO][5269] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" iface="eth0" netns="/var/run/netns/cni-4b615ab2-dc24-43e2-1bf0-b6ed0683dddd" Jul 12 00:09:36.248615 containerd[1748]: 2025-07-12 00:09:36.041 [INFO][5269] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" iface="eth0" netns="/var/run/netns/cni-4b615ab2-dc24-43e2-1bf0-b6ed0683dddd" Jul 12 00:09:36.248615 containerd[1748]: 2025-07-12 00:09:36.041 [INFO][5269] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" Jul 12 00:09:36.248615 containerd[1748]: 2025-07-12 00:09:36.042 [INFO][5269] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" Jul 12 00:09:36.248615 containerd[1748]: 2025-07-12 00:09:36.157 [INFO][5313] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" HandleID="k8s-pod-network.cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--s85hw-eth0" Jul 12 00:09:36.248615 containerd[1748]: 2025-07-12 00:09:36.158 [INFO][5313] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:36.248615 containerd[1748]: 2025-07-12 00:09:36.189 [INFO][5313] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:36.248615 containerd[1748]: 2025-07-12 00:09:36.230 [WARNING][5313] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" HandleID="k8s-pod-network.cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--s85hw-eth0" Jul 12 00:09:36.248615 containerd[1748]: 2025-07-12 00:09:36.230 [INFO][5313] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" HandleID="k8s-pod-network.cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--s85hw-eth0" Jul 12 00:09:36.248615 containerd[1748]: 2025-07-12 00:09:36.233 [INFO][5313] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:36.248615 containerd[1748]: 2025-07-12 00:09:36.240 [INFO][5269] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" Jul 12 00:09:36.250020 containerd[1748]: time="2025-07-12T00:09:36.249959106Z" level=info msg="TearDown network for sandbox \"cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace\" successfully" Jul 12 00:09:36.250187 containerd[1748]: time="2025-07-12T00:09:36.250080426Z" level=info msg="StopPodSandbox for \"cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace\" returns successfully" Jul 12 00:09:36.252667 containerd[1748]: time="2025-07-12T00:09:36.252462623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fc958d55f-s85hw,Uid:f86b7bbb-ff83-4e2c-ba5d-1ba824643d5e,Namespace:calico-apiserver,Attempt:1,}" Jul 12 00:09:36.253940 systemd[1]: run-netns-cni\x2d4b615ab2\x2ddc24\x2d43e2\x2d1bf0\x2db6ed0683dddd.mount: Deactivated successfully. 
Jul 12 00:09:36.566886 systemd-networkd[1562]: cali7201d1eddc8: Link UP Jul 12 00:09:36.568812 systemd-networkd[1562]: cali7201d1eddc8: Gained carrier Jul 12 00:09:36.598002 containerd[1748]: 2025-07-12 00:09:36.397 [INFO][5347] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--6lhhr-eth0 coredns-674b8bbfcf- kube-system 664d0b79-40b5-4d9c-8498-5a2e2d35a983 999 0 2025-07-12 00:08:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.4-n-047a586f92 coredns-674b8bbfcf-6lhhr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7201d1eddc8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="762f6eb6d9fd8596cc557ea4ae280a10fc807e9800271855850d53d793a931a6" Namespace="kube-system" Pod="coredns-674b8bbfcf-6lhhr" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--6lhhr-" Jul 12 00:09:36.598002 containerd[1748]: 2025-07-12 00:09:36.398 [INFO][5347] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="762f6eb6d9fd8596cc557ea4ae280a10fc807e9800271855850d53d793a931a6" Namespace="kube-system" Pod="coredns-674b8bbfcf-6lhhr" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--6lhhr-eth0" Jul 12 00:09:36.598002 containerd[1748]: 2025-07-12 00:09:36.474 [INFO][5393] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="762f6eb6d9fd8596cc557ea4ae280a10fc807e9800271855850d53d793a931a6" HandleID="k8s-pod-network.762f6eb6d9fd8596cc557ea4ae280a10fc807e9800271855850d53d793a931a6" Workload="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--6lhhr-eth0" Jul 12 00:09:36.598002 containerd[1748]: 2025-07-12 00:09:36.474 [INFO][5393] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="762f6eb6d9fd8596cc557ea4ae280a10fc807e9800271855850d53d793a931a6" HandleID="k8s-pod-network.762f6eb6d9fd8596cc557ea4ae280a10fc807e9800271855850d53d793a931a6" Workload="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--6lhhr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b820), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.4-n-047a586f92", "pod":"coredns-674b8bbfcf-6lhhr", "timestamp":"2025-07-12 00:09:36.472895412 +0000 UTC"}, Hostname:"ci-4081.3.4-n-047a586f92", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:09:36.598002 containerd[1748]: 2025-07-12 00:09:36.474 [INFO][5393] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:36.598002 containerd[1748]: 2025-07-12 00:09:36.474 [INFO][5393] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:09:36.598002 containerd[1748]: 2025-07-12 00:09:36.474 [INFO][5393] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-n-047a586f92' Jul 12 00:09:36.598002 containerd[1748]: 2025-07-12 00:09:36.499 [INFO][5393] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.762f6eb6d9fd8596cc557ea4ae280a10fc807e9800271855850d53d793a931a6" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:36.598002 containerd[1748]: 2025-07-12 00:09:36.508 [INFO][5393] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:36.598002 containerd[1748]: 2025-07-12 00:09:36.525 [INFO][5393] ipam/ipam.go 511: Trying affinity for 192.168.35.192/26 host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:36.598002 containerd[1748]: 2025-07-12 00:09:36.529 [INFO][5393] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.192/26 host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:36.598002 containerd[1748]: 2025-07-12 00:09:36.532 [INFO][5393] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.192/26 host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:36.598002 containerd[1748]: 2025-07-12 00:09:36.533 [INFO][5393] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.35.192/26 handle="k8s-pod-network.762f6eb6d9fd8596cc557ea4ae280a10fc807e9800271855850d53d793a931a6" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:36.598002 containerd[1748]: 2025-07-12 00:09:36.534 [INFO][5393] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.762f6eb6d9fd8596cc557ea4ae280a10fc807e9800271855850d53d793a931a6 Jul 12 00:09:36.598002 containerd[1748]: 2025-07-12 00:09:36.543 [INFO][5393] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.35.192/26 handle="k8s-pod-network.762f6eb6d9fd8596cc557ea4ae280a10fc807e9800271855850d53d793a931a6" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:36.598002 containerd[1748]: 2025-07-12 00:09:36.554 [INFO][5393] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.35.198/26] block=192.168.35.192/26 handle="k8s-pod-network.762f6eb6d9fd8596cc557ea4ae280a10fc807e9800271855850d53d793a931a6" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:36.598002 containerd[1748]: 2025-07-12 00:09:36.554 [INFO][5393] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.198/26] handle="k8s-pod-network.762f6eb6d9fd8596cc557ea4ae280a10fc807e9800271855850d53d793a931a6" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:36.598002 containerd[1748]: 2025-07-12 00:09:36.554 [INFO][5393] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:36.598002 containerd[1748]: 2025-07-12 00:09:36.554 [INFO][5393] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.198/26] IPv6=[] ContainerID="762f6eb6d9fd8596cc557ea4ae280a10fc807e9800271855850d53d793a931a6" HandleID="k8s-pod-network.762f6eb6d9fd8596cc557ea4ae280a10fc807e9800271855850d53d793a931a6" Workload="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--6lhhr-eth0" Jul 12 00:09:36.599378 containerd[1748]: 2025-07-12 00:09:36.562 [INFO][5347] cni-plugin/k8s.go 418: Populated endpoint ContainerID="762f6eb6d9fd8596cc557ea4ae280a10fc807e9800271855850d53d793a931a6" Namespace="kube-system" Pod="coredns-674b8bbfcf-6lhhr" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--6lhhr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--6lhhr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"664d0b79-40b5-4d9c-8498-5a2e2d35a983", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-047a586f92", ContainerID:"", Pod:"coredns-674b8bbfcf-6lhhr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7201d1eddc8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:36.599378 containerd[1748]: 2025-07-12 00:09:36.562 [INFO][5347] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.198/32] ContainerID="762f6eb6d9fd8596cc557ea4ae280a10fc807e9800271855850d53d793a931a6" Namespace="kube-system" Pod="coredns-674b8bbfcf-6lhhr" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--6lhhr-eth0" Jul 12 00:09:36.599378 containerd[1748]: 2025-07-12 00:09:36.562 [INFO][5347] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7201d1eddc8 ContainerID="762f6eb6d9fd8596cc557ea4ae280a10fc807e9800271855850d53d793a931a6" Namespace="kube-system" Pod="coredns-674b8bbfcf-6lhhr" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--6lhhr-eth0" Jul 12 00:09:36.599378 containerd[1748]: 2025-07-12 00:09:36.567 [INFO][5347] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="762f6eb6d9fd8596cc557ea4ae280a10fc807e9800271855850d53d793a931a6" Namespace="kube-system" Pod="coredns-674b8bbfcf-6lhhr" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--6lhhr-eth0" Jul 12 00:09:36.599378 containerd[1748]: 2025-07-12 00:09:36.568 [INFO][5347] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="762f6eb6d9fd8596cc557ea4ae280a10fc807e9800271855850d53d793a931a6" Namespace="kube-system" Pod="coredns-674b8bbfcf-6lhhr" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--6lhhr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--6lhhr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"664d0b79-40b5-4d9c-8498-5a2e2d35a983", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-047a586f92", ContainerID:"762f6eb6d9fd8596cc557ea4ae280a10fc807e9800271855850d53d793a931a6", Pod:"coredns-674b8bbfcf-6lhhr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7201d1eddc8", MAC:"82:ff:5c:02:32:5a", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:36.599378 containerd[1748]: 2025-07-12 00:09:36.594 [INFO][5347] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="762f6eb6d9fd8596cc557ea4ae280a10fc807e9800271855850d53d793a931a6" Namespace="kube-system" Pod="coredns-674b8bbfcf-6lhhr" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--6lhhr-eth0" Jul 12 00:09:36.665416 containerd[1748]: time="2025-07-12T00:09:36.665340553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:09:36.665861 containerd[1748]: time="2025-07-12T00:09:36.665724912Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:09:36.666168 containerd[1748]: time="2025-07-12T00:09:36.666134552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:36.668835 containerd[1748]: time="2025-07-12T00:09:36.667756110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:36.684676 systemd-networkd[1562]: calie17d4eee484: Link UP Jul 12 00:09:36.687502 systemd-networkd[1562]: calie17d4eee484: Gained carrier Jul 12 00:09:36.716307 containerd[1748]: 2025-07-12 00:09:36.409 [INFO][5361] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--n--047a586f92-k8s-goldmane--768f4c5c69--4sphm-eth0 goldmane-768f4c5c69- calico-system bd421362-01fe-4241-bb04-72d4085cf927 998 0 2025-07-12 00:09:10 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.4-n-047a586f92 goldmane-768f4c5c69-4sphm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie17d4eee484 [] [] }} ContainerID="a97b6155af212f8406b775033e5202d64ddec016993b775e2f33b09753509e9d" Namespace="calico-system" Pod="goldmane-768f4c5c69-4sphm" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-goldmane--768f4c5c69--4sphm-" Jul 12 00:09:36.716307 containerd[1748]: 2025-07-12 00:09:36.409 [INFO][5361] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a97b6155af212f8406b775033e5202d64ddec016993b775e2f33b09753509e9d" Namespace="calico-system" Pod="goldmane-768f4c5c69-4sphm" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-goldmane--768f4c5c69--4sphm-eth0" Jul 12 00:09:36.716307 containerd[1748]: 2025-07-12 00:09:36.476 [INFO][5398] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a97b6155af212f8406b775033e5202d64ddec016993b775e2f33b09753509e9d" HandleID="k8s-pod-network.a97b6155af212f8406b775033e5202d64ddec016993b775e2f33b09753509e9d" Workload="ci--4081.3.4--n--047a586f92-k8s-goldmane--768f4c5c69--4sphm-eth0" Jul 12 00:09:36.716307 containerd[1748]: 2025-07-12 00:09:36.476 [INFO][5398] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="a97b6155af212f8406b775033e5202d64ddec016993b775e2f33b09753509e9d" HandleID="k8s-pod-network.a97b6155af212f8406b775033e5202d64ddec016993b775e2f33b09753509e9d" Workload="ci--4081.3.4--n--047a586f92-k8s-goldmane--768f4c5c69--4sphm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cfa90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-n-047a586f92", "pod":"goldmane-768f4c5c69-4sphm", "timestamp":"2025-07-12 00:09:36.476346808 +0000 UTC"}, Hostname:"ci-4081.3.4-n-047a586f92", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:09:36.716307 containerd[1748]: 2025-07-12 00:09:36.477 [INFO][5398] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:36.716307 containerd[1748]: 2025-07-12 00:09:36.555 [INFO][5398] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:09:36.716307 containerd[1748]: 2025-07-12 00:09:36.556 [INFO][5398] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-n-047a586f92' Jul 12 00:09:36.716307 containerd[1748]: 2025-07-12 00:09:36.601 [INFO][5398] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a97b6155af212f8406b775033e5202d64ddec016993b775e2f33b09753509e9d" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:36.716307 containerd[1748]: 2025-07-12 00:09:36.608 [INFO][5398] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:36.716307 containerd[1748]: 2025-07-12 00:09:36.619 [INFO][5398] ipam/ipam.go 511: Trying affinity for 192.168.35.192/26 host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:36.716307 containerd[1748]: 2025-07-12 00:09:36.621 [INFO][5398] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.192/26 host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:36.716307 containerd[1748]: 2025-07-12 00:09:36.625 [INFO][5398] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.192/26 host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:36.716307 containerd[1748]: 2025-07-12 00:09:36.625 [INFO][5398] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.35.192/26 handle="k8s-pod-network.a97b6155af212f8406b775033e5202d64ddec016993b775e2f33b09753509e9d" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:36.716307 containerd[1748]: 2025-07-12 00:09:36.627 [INFO][5398] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a97b6155af212f8406b775033e5202d64ddec016993b775e2f33b09753509e9d Jul 12 00:09:36.716307 containerd[1748]: 2025-07-12 00:09:36.638 [INFO][5398] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.35.192/26 handle="k8s-pod-network.a97b6155af212f8406b775033e5202d64ddec016993b775e2f33b09753509e9d" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:36.716307 containerd[1748]: 2025-07-12 00:09:36.670 [INFO][5398] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.35.199/26] block=192.168.35.192/26 handle="k8s-pod-network.a97b6155af212f8406b775033e5202d64ddec016993b775e2f33b09753509e9d" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:36.716307 containerd[1748]: 2025-07-12 00:09:36.671 [INFO][5398] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.199/26] handle="k8s-pod-network.a97b6155af212f8406b775033e5202d64ddec016993b775e2f33b09753509e9d" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:36.716307 containerd[1748]: 2025-07-12 00:09:36.672 [INFO][5398] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:36.716307 containerd[1748]: 2025-07-12 00:09:36.672 [INFO][5398] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.199/26] IPv6=[] ContainerID="a97b6155af212f8406b775033e5202d64ddec016993b775e2f33b09753509e9d" HandleID="k8s-pod-network.a97b6155af212f8406b775033e5202d64ddec016993b775e2f33b09753509e9d" Workload="ci--4081.3.4--n--047a586f92-k8s-goldmane--768f4c5c69--4sphm-eth0" Jul 12 00:09:36.716859 containerd[1748]: 2025-07-12 00:09:36.679 [INFO][5361] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a97b6155af212f8406b775033e5202d64ddec016993b775e2f33b09753509e9d" Namespace="calico-system" Pod="goldmane-768f4c5c69-4sphm" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-goldmane--768f4c5c69--4sphm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--047a586f92-k8s-goldmane--768f4c5c69--4sphm-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"bd421362-01fe-4241-bb04-72d4085cf927", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-047a586f92", ContainerID:"", Pod:"goldmane-768f4c5c69-4sphm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.35.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie17d4eee484", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:36.716859 containerd[1748]: 2025-07-12 00:09:36.679 [INFO][5361] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.199/32] ContainerID="a97b6155af212f8406b775033e5202d64ddec016993b775e2f33b09753509e9d" Namespace="calico-system" Pod="goldmane-768f4c5c69-4sphm" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-goldmane--768f4c5c69--4sphm-eth0" Jul 12 00:09:36.716859 containerd[1748]: 2025-07-12 00:09:36.679 [INFO][5361] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie17d4eee484 ContainerID="a97b6155af212f8406b775033e5202d64ddec016993b775e2f33b09753509e9d" Namespace="calico-system" Pod="goldmane-768f4c5c69-4sphm" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-goldmane--768f4c5c69--4sphm-eth0" Jul 12 00:09:36.716859 containerd[1748]: 2025-07-12 00:09:36.685 [INFO][5361] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a97b6155af212f8406b775033e5202d64ddec016993b775e2f33b09753509e9d" Namespace="calico-system" Pod="goldmane-768f4c5c69-4sphm" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-goldmane--768f4c5c69--4sphm-eth0" Jul 12 00:09:36.716859 containerd[1748]: 2025-07-12 00:09:36.685 [INFO][5361] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a97b6155af212f8406b775033e5202d64ddec016993b775e2f33b09753509e9d" Namespace="calico-system" Pod="goldmane-768f4c5c69-4sphm" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-goldmane--768f4c5c69--4sphm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--047a586f92-k8s-goldmane--768f4c5c69--4sphm-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"bd421362-01fe-4241-bb04-72d4085cf927", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-047a586f92", ContainerID:"a97b6155af212f8406b775033e5202d64ddec016993b775e2f33b09753509e9d", Pod:"goldmane-768f4c5c69-4sphm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.35.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie17d4eee484", MAC:"ae:81:41:fd:5f:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:36.716859 containerd[1748]: 2025-07-12 00:09:36.706 [INFO][5361] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="a97b6155af212f8406b775033e5202d64ddec016993b775e2f33b09753509e9d" Namespace="calico-system" Pod="goldmane-768f4c5c69-4sphm" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-goldmane--768f4c5c69--4sphm-eth0" Jul 12 00:09:36.736439 systemd[1]: Started cri-containerd-762f6eb6d9fd8596cc557ea4ae280a10fc807e9800271855850d53d793a931a6.scope - libcontainer container 762f6eb6d9fd8596cc557ea4ae280a10fc807e9800271855850d53d793a931a6. Jul 12 00:09:36.770538 containerd[1748]: time="2025-07-12T00:09:36.770167434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:09:36.770538 containerd[1748]: time="2025-07-12T00:09:36.770244433Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:09:36.770538 containerd[1748]: time="2025-07-12T00:09:36.770273273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:36.770538 containerd[1748]: time="2025-07-12T00:09:36.770353313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:36.797093 systemd[1]: Started cri-containerd-a97b6155af212f8406b775033e5202d64ddec016993b775e2f33b09753509e9d.scope - libcontainer container a97b6155af212f8406b775033e5202d64ddec016993b775e2f33b09753509e9d. 
Jul 12 00:09:36.826321 containerd[1748]: time="2025-07-12T00:09:36.826037930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6lhhr,Uid:664d0b79-40b5-4d9c-8498-5a2e2d35a983,Namespace:kube-system,Attempt:1,} returns sandbox id \"762f6eb6d9fd8596cc557ea4ae280a10fc807e9800271855850d53d793a931a6\"" Jul 12 00:09:36.839766 systemd-networkd[1562]: cali1a4c3c4da62: Link UP Jul 12 00:09:36.842750 systemd-networkd[1562]: cali1a4c3c4da62: Gained carrier Jul 12 00:09:36.845791 containerd[1748]: time="2025-07-12T00:09:36.845759707Z" level=info msg="CreateContainer within sandbox \"762f6eb6d9fd8596cc557ea4ae280a10fc807e9800271855850d53d793a931a6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:09:36.846078 systemd-networkd[1562]: cali61478a4999b: Gained IPv6LL Jul 12 00:09:36.880873 containerd[1748]: 2025-07-12 00:09:36.476 [INFO][5381] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--s85hw-eth0 calico-apiserver-7fc958d55f- calico-apiserver f86b7bbb-ff83-4e2c-ba5d-1ba824643d5e 1000 0 2025-07-12 00:09:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7fc958d55f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.4-n-047a586f92 calico-apiserver-7fc958d55f-s85hw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1a4c3c4da62 [] [] }} ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" Namespace="calico-apiserver" Pod="calico-apiserver-7fc958d55f-s85hw" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--s85hw-" Jul 12 00:09:36.880873 containerd[1748]: 2025-07-12 00:09:36.477 [INFO][5381] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" Namespace="calico-apiserver" Pod="calico-apiserver-7fc958d55f-s85hw" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--s85hw-eth0" Jul 12 00:09:36.880873 containerd[1748]: 2025-07-12 00:09:36.527 [INFO][5411] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" HandleID="k8s-pod-network.0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--s85hw-eth0" Jul 12 00:09:36.880873 containerd[1748]: 2025-07-12 00:09:36.527 [INFO][5411] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" HandleID="k8s-pod-network.0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--s85hw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3020), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.4-n-047a586f92", "pod":"calico-apiserver-7fc958d55f-s85hw", "timestamp":"2025-07-12 00:09:36.52768767 +0000 UTC"}, Hostname:"ci-4081.3.4-n-047a586f92", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:09:36.880873 containerd[1748]: 2025-07-12 00:09:36.528 [INFO][5411] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:36.880873 containerd[1748]: 2025-07-12 00:09:36.672 [INFO][5411] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:09:36.880873 containerd[1748]: 2025-07-12 00:09:36.673 [INFO][5411] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-n-047a586f92' Jul 12 00:09:36.880873 containerd[1748]: 2025-07-12 00:09:36.705 [INFO][5411] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:36.880873 containerd[1748]: 2025-07-12 00:09:36.723 [INFO][5411] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:36.880873 containerd[1748]: 2025-07-12 00:09:36.745 [INFO][5411] ipam/ipam.go 511: Trying affinity for 192.168.35.192/26 host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:36.880873 containerd[1748]: 2025-07-12 00:09:36.750 [INFO][5411] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.192/26 host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:36.880873 containerd[1748]: 2025-07-12 00:09:36.772 [INFO][5411] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.192/26 host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:36.880873 containerd[1748]: 2025-07-12 00:09:36.772 [INFO][5411] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.35.192/26 handle="k8s-pod-network.0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:36.880873 containerd[1748]: 2025-07-12 00:09:36.780 [INFO][5411] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7 Jul 12 00:09:36.880873 containerd[1748]: 2025-07-12 00:09:36.798 [INFO][5411] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.35.192/26 handle="k8s-pod-network.0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:36.880873 containerd[1748]: 2025-07-12 00:09:36.817 [INFO][5411] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.35.200/26] block=192.168.35.192/26 handle="k8s-pod-network.0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:36.880873 containerd[1748]: 2025-07-12 00:09:36.817 [INFO][5411] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.200/26] handle="k8s-pod-network.0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:36.880873 containerd[1748]: 2025-07-12 00:09:36.817 [INFO][5411] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:36.880873 containerd[1748]: 2025-07-12 00:09:36.817 [INFO][5411] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.200/26] IPv6=[] ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" HandleID="k8s-pod-network.0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--s85hw-eth0" Jul 12 00:09:36.881534 containerd[1748]: 2025-07-12 00:09:36.825 [INFO][5381] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" Namespace="calico-apiserver" Pod="calico-apiserver-7fc958d55f-s85hw" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--s85hw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--s85hw-eth0", GenerateName:"calico-apiserver-7fc958d55f-", Namespace:"calico-apiserver", SelfLink:"", UID:"f86b7bbb-ff83-4e2c-ba5d-1ba824643d5e", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"7fc958d55f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-047a586f92", ContainerID:"", Pod:"calico-apiserver-7fc958d55f-s85hw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1a4c3c4da62", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:36.881534 containerd[1748]: 2025-07-12 00:09:36.825 [INFO][5381] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.200/32] ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" Namespace="calico-apiserver" Pod="calico-apiserver-7fc958d55f-s85hw" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--s85hw-eth0" Jul 12 00:09:36.881534 containerd[1748]: 2025-07-12 00:09:36.825 [INFO][5381] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1a4c3c4da62 ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" Namespace="calico-apiserver" Pod="calico-apiserver-7fc958d55f-s85hw" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--s85hw-eth0" Jul 12 00:09:36.881534 containerd[1748]: 2025-07-12 00:09:36.850 [INFO][5381] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" Namespace="calico-apiserver" Pod="calico-apiserver-7fc958d55f-s85hw" 
WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--s85hw-eth0" Jul 12 00:09:36.881534 containerd[1748]: 2025-07-12 00:09:36.851 [INFO][5381] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" Namespace="calico-apiserver" Pod="calico-apiserver-7fc958d55f-s85hw" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--s85hw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--s85hw-eth0", GenerateName:"calico-apiserver-7fc958d55f-", Namespace:"calico-apiserver", SelfLink:"", UID:"f86b7bbb-ff83-4e2c-ba5d-1ba824643d5e", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fc958d55f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-047a586f92", ContainerID:"0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7", Pod:"calico-apiserver-7fc958d55f-s85hw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1a4c3c4da62", MAC:"82:ec:71:5e:da:90", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:36.881534 containerd[1748]: 2025-07-12 00:09:36.874 [INFO][5381] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" Namespace="calico-apiserver" Pod="calico-apiserver-7fc958d55f-s85hw" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--s85hw-eth0" Jul 12 00:09:36.906633 containerd[1748]: time="2025-07-12T00:09:36.906583398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-4sphm,Uid:bd421362-01fe-4241-bb04-72d4085cf927,Namespace:calico-system,Attempt:1,} returns sandbox id \"a97b6155af212f8406b775033e5202d64ddec016993b775e2f33b09753509e9d\"" Jul 12 00:09:36.932028 containerd[1748]: time="2025-07-12T00:09:36.931892809Z" level=info msg="StopPodSandbox for \"d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485\"" Jul 12 00:09:36.948688 containerd[1748]: time="2025-07-12T00:09:36.947243392Z" level=info msg="CreateContainer within sandbox \"762f6eb6d9fd8596cc557ea4ae280a10fc807e9800271855850d53d793a931a6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b0dab843fa02969b0ca327d0a9508fa8b231dd45a140b517d5f6a31a9f98d677\"" Jul 12 00:09:36.954331 containerd[1748]: time="2025-07-12T00:09:36.953069665Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:09:36.954331 containerd[1748]: time="2025-07-12T00:09:36.953503905Z" level=info msg="StartContainer for \"b0dab843fa02969b0ca327d0a9508fa8b231dd45a140b517d5f6a31a9f98d677\"" Jul 12 00:09:36.956543 containerd[1748]: time="2025-07-12T00:09:36.953513305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:09:36.956543 containerd[1748]: time="2025-07-12T00:09:36.953531505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:36.956543 containerd[1748]: time="2025-07-12T00:09:36.954017184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:37.000485 systemd[1]: Started cri-containerd-0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7.scope - libcontainer container 0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7. Jul 12 00:09:37.016428 systemd[1]: Started cri-containerd-b0dab843fa02969b0ca327d0a9508fa8b231dd45a140b517d5f6a31a9f98d677.scope - libcontainer container b0dab843fa02969b0ca327d0a9508fa8b231dd45a140b517d5f6a31a9f98d677. Jul 12 00:09:37.103444 containerd[1748]: time="2025-07-12T00:09:37.101636936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fc958d55f-s85hw,Uid:f86b7bbb-ff83-4e2c-ba5d-1ba824643d5e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7\"" Jul 12 00:09:37.112989 containerd[1748]: time="2025-07-12T00:09:37.112898603Z" level=info msg="StartContainer for \"b0dab843fa02969b0ca327d0a9508fa8b231dd45a140b517d5f6a31a9f98d677\" returns successfully" Jul 12 00:09:37.186179 containerd[1748]: 2025-07-12 00:09:37.114 [INFO][5569] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" Jul 12 00:09:37.186179 containerd[1748]: 2025-07-12 00:09:37.115 [INFO][5569] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" iface="eth0" netns="/var/run/netns/cni-b05f562a-19a9-4419-5a75-0e4ae690f409" Jul 12 00:09:37.186179 containerd[1748]: 2025-07-12 00:09:37.115 [INFO][5569] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" iface="eth0" netns="/var/run/netns/cni-b05f562a-19a9-4419-5a75-0e4ae690f409" Jul 12 00:09:37.186179 containerd[1748]: 2025-07-12 00:09:37.115 [INFO][5569] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" iface="eth0" netns="/var/run/netns/cni-b05f562a-19a9-4419-5a75-0e4ae690f409" Jul 12 00:09:37.186179 containerd[1748]: 2025-07-12 00:09:37.115 [INFO][5569] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" Jul 12 00:09:37.186179 containerd[1748]: 2025-07-12 00:09:37.115 [INFO][5569] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" Jul 12 00:09:37.186179 containerd[1748]: 2025-07-12 00:09:37.152 [INFO][5636] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" HandleID="k8s-pod-network.d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" Workload="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--jvx6w-eth0" Jul 12 00:09:37.186179 containerd[1748]: 2025-07-12 00:09:37.152 [INFO][5636] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:37.186179 containerd[1748]: 2025-07-12 00:09:37.152 [INFO][5636] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:09:37.186179 containerd[1748]: 2025-07-12 00:09:37.165 [WARNING][5636] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" HandleID="k8s-pod-network.d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" Workload="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--jvx6w-eth0" Jul 12 00:09:37.186179 containerd[1748]: 2025-07-12 00:09:37.165 [INFO][5636] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" HandleID="k8s-pod-network.d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" Workload="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--jvx6w-eth0" Jul 12 00:09:37.186179 containerd[1748]: 2025-07-12 00:09:37.167 [INFO][5636] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:37.186179 containerd[1748]: 2025-07-12 00:09:37.170 [INFO][5569] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" Jul 12 00:09:37.189002 containerd[1748]: time="2025-07-12T00:09:37.188140957Z" level=info msg="TearDown network for sandbox \"d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485\" successfully" Jul 12 00:09:37.189002 containerd[1748]: time="2025-07-12T00:09:37.188177077Z" level=info msg="StopPodSandbox for \"d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485\" returns successfully" Jul 12 00:09:37.191099 containerd[1748]: time="2025-07-12T00:09:37.189513556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jvx6w,Uid:2c55f8f7-b85b-4e3f-974e-90f3a29f93c9,Namespace:kube-system,Attempt:1,}" Jul 12 00:09:37.190916 systemd[1]: run-netns-cni\x2db05f562a\x2d19a9\x2d4419\x2d5a75\x2d0e4ae690f409.mount: Deactivated successfully. 
Jul 12 00:09:37.528600 systemd-networkd[1562]: califfb9fbe2dcc: Link UP Jul 12 00:09:37.531708 systemd-networkd[1562]: califfb9fbe2dcc: Gained carrier Jul 12 00:09:37.571230 kubelet[3127]: I0712 00:09:37.571164 3127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6lhhr" podStartSLOduration=46.571145361 podStartE2EDuration="46.571145361s" podCreationTimestamp="2025-07-12 00:08:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:09:37.223993157 +0000 UTC m=+52.394703293" watchObservedRunningTime="2025-07-12 00:09:37.571145361 +0000 UTC m=+52.741855457" Jul 12 00:09:37.576901 containerd[1748]: 2025-07-12 00:09:37.336 [INFO][5649] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--jvx6w-eth0 coredns-674b8bbfcf- kube-system 2c55f8f7-b85b-4e3f-974e-90f3a29f93c9 1020 0 2025-07-12 00:08:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.4-n-047a586f92 coredns-674b8bbfcf-jvx6w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califfb9fbe2dcc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9644b1419e1772481fea3fb0f6b6c58244dab7b76e5eff6c36e4e9d538288dbe" Namespace="kube-system" Pod="coredns-674b8bbfcf-jvx6w" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--jvx6w-" Jul 12 00:09:37.576901 containerd[1748]: 2025-07-12 00:09:37.337 [INFO][5649] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9644b1419e1772481fea3fb0f6b6c58244dab7b76e5eff6c36e4e9d538288dbe" Namespace="kube-system" Pod="coredns-674b8bbfcf-jvx6w" 
WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--jvx6w-eth0" Jul 12 00:09:37.576901 containerd[1748]: 2025-07-12 00:09:37.393 [INFO][5662] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9644b1419e1772481fea3fb0f6b6c58244dab7b76e5eff6c36e4e9d538288dbe" HandleID="k8s-pod-network.9644b1419e1772481fea3fb0f6b6c58244dab7b76e5eff6c36e4e9d538288dbe" Workload="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--jvx6w-eth0" Jul 12 00:09:37.576901 containerd[1748]: 2025-07-12 00:09:37.393 [INFO][5662] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9644b1419e1772481fea3fb0f6b6c58244dab7b76e5eff6c36e4e9d538288dbe" HandleID="k8s-pod-network.9644b1419e1772481fea3fb0f6b6c58244dab7b76e5eff6c36e4e9d538288dbe" Workload="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--jvx6w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d980), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.4-n-047a586f92", "pod":"coredns-674b8bbfcf-jvx6w", "timestamp":"2025-07-12 00:09:37.393335084 +0000 UTC"}, Hostname:"ci-4081.3.4-n-047a586f92", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:09:37.576901 containerd[1748]: 2025-07-12 00:09:37.393 [INFO][5662] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:37.576901 containerd[1748]: 2025-07-12 00:09:37.393 [INFO][5662] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:09:37.576901 containerd[1748]: 2025-07-12 00:09:37.393 [INFO][5662] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-n-047a586f92' Jul 12 00:09:37.576901 containerd[1748]: 2025-07-12 00:09:37.416 [INFO][5662] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9644b1419e1772481fea3fb0f6b6c58244dab7b76e5eff6c36e4e9d538288dbe" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:37.576901 containerd[1748]: 2025-07-12 00:09:37.441 [INFO][5662] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:37.576901 containerd[1748]: 2025-07-12 00:09:37.451 [INFO][5662] ipam/ipam.go 511: Trying affinity for 192.168.35.192/26 host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:37.576901 containerd[1748]: 2025-07-12 00:09:37.453 [INFO][5662] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.192/26 host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:37.576901 containerd[1748]: 2025-07-12 00:09:37.456 [INFO][5662] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.192/26 host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:37.576901 containerd[1748]: 2025-07-12 00:09:37.456 [INFO][5662] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.35.192/26 handle="k8s-pod-network.9644b1419e1772481fea3fb0f6b6c58244dab7b76e5eff6c36e4e9d538288dbe" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:37.576901 containerd[1748]: 2025-07-12 00:09:37.458 [INFO][5662] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9644b1419e1772481fea3fb0f6b6c58244dab7b76e5eff6c36e4e9d538288dbe Jul 12 00:09:37.576901 containerd[1748]: 2025-07-12 00:09:37.480 [INFO][5662] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.35.192/26 handle="k8s-pod-network.9644b1419e1772481fea3fb0f6b6c58244dab7b76e5eff6c36e4e9d538288dbe" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:37.576901 containerd[1748]: 2025-07-12 00:09:37.509 [INFO][5662] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.35.201/26] block=192.168.35.192/26 handle="k8s-pod-network.9644b1419e1772481fea3fb0f6b6c58244dab7b76e5eff6c36e4e9d538288dbe" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:37.576901 containerd[1748]: 2025-07-12 00:09:37.509 [INFO][5662] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.201/26] handle="k8s-pod-network.9644b1419e1772481fea3fb0f6b6c58244dab7b76e5eff6c36e4e9d538288dbe" host="ci-4081.3.4-n-047a586f92" Jul 12 00:09:37.576901 containerd[1748]: 2025-07-12 00:09:37.509 [INFO][5662] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:37.576901 containerd[1748]: 2025-07-12 00:09:37.509 [INFO][5662] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.201/26] IPv6=[] ContainerID="9644b1419e1772481fea3fb0f6b6c58244dab7b76e5eff6c36e4e9d538288dbe" HandleID="k8s-pod-network.9644b1419e1772481fea3fb0f6b6c58244dab7b76e5eff6c36e4e9d538288dbe" Workload="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--jvx6w-eth0" Jul 12 00:09:37.577492 containerd[1748]: 2025-07-12 00:09:37.513 [INFO][5649] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9644b1419e1772481fea3fb0f6b6c58244dab7b76e5eff6c36e4e9d538288dbe" Namespace="kube-system" Pod="coredns-674b8bbfcf-jvx6w" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--jvx6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--jvx6w-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2c55f8f7-b85b-4e3f-974e-90f3a29f93c9", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-047a586f92", ContainerID:"", Pod:"coredns-674b8bbfcf-jvx6w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califfb9fbe2dcc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:37.577492 containerd[1748]: 2025-07-12 00:09:37.514 [INFO][5649] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.201/32] ContainerID="9644b1419e1772481fea3fb0f6b6c58244dab7b76e5eff6c36e4e9d538288dbe" Namespace="kube-system" Pod="coredns-674b8bbfcf-jvx6w" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--jvx6w-eth0" Jul 12 00:09:37.577492 containerd[1748]: 2025-07-12 00:09:37.514 [INFO][5649] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califfb9fbe2dcc ContainerID="9644b1419e1772481fea3fb0f6b6c58244dab7b76e5eff6c36e4e9d538288dbe" Namespace="kube-system" Pod="coredns-674b8bbfcf-jvx6w" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--jvx6w-eth0" Jul 12 00:09:37.577492 containerd[1748]: 2025-07-12 00:09:37.534 [INFO][5649] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9644b1419e1772481fea3fb0f6b6c58244dab7b76e5eff6c36e4e9d538288dbe" Namespace="kube-system" Pod="coredns-674b8bbfcf-jvx6w" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--jvx6w-eth0" Jul 12 00:09:37.577492 containerd[1748]: 2025-07-12 00:09:37.538 [INFO][5649] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9644b1419e1772481fea3fb0f6b6c58244dab7b76e5eff6c36e4e9d538288dbe" Namespace="kube-system" Pod="coredns-674b8bbfcf-jvx6w" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--jvx6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--jvx6w-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2c55f8f7-b85b-4e3f-974e-90f3a29f93c9", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-047a586f92", ContainerID:"9644b1419e1772481fea3fb0f6b6c58244dab7b76e5eff6c36e4e9d538288dbe", Pod:"coredns-674b8bbfcf-jvx6w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califfb9fbe2dcc", MAC:"8e:6c:b1:db:dc:e6", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:37.577492 containerd[1748]: 2025-07-12 00:09:37.571 [INFO][5649] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9644b1419e1772481fea3fb0f6b6c58244dab7b76e5eff6c36e4e9d538288dbe" Namespace="kube-system" Pod="coredns-674b8bbfcf-jvx6w" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--jvx6w-eth0" Jul 12 00:09:37.633265 containerd[1748]: time="2025-07-12T00:09:37.632694051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:09:37.633265 containerd[1748]: time="2025-07-12T00:09:37.632746531Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:09:37.633265 containerd[1748]: time="2025-07-12T00:09:37.632761411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:37.633265 containerd[1748]: time="2025-07-12T00:09:37.632840251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:37.663648 systemd[1]: Started cri-containerd-9644b1419e1772481fea3fb0f6b6c58244dab7b76e5eff6c36e4e9d538288dbe.scope - libcontainer container 9644b1419e1772481fea3fb0f6b6c58244dab7b76e5eff6c36e4e9d538288dbe. 
Jul 12 00:09:37.718932 containerd[1748]: time="2025-07-12T00:09:37.718881633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jvx6w,Uid:2c55f8f7-b85b-4e3f-974e-90f3a29f93c9,Namespace:kube-system,Attempt:1,} returns sandbox id \"9644b1419e1772481fea3fb0f6b6c58244dab7b76e5eff6c36e4e9d538288dbe\"" Jul 12 00:09:37.739233 containerd[1748]: time="2025-07-12T00:09:37.738178891Z" level=info msg="CreateContainer within sandbox \"9644b1419e1772481fea3fb0f6b6c58244dab7b76e5eff6c36e4e9d538288dbe\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:09:37.738440 systemd-networkd[1562]: vxlan.calico: Gained IPv6LL Jul 12 00:09:37.823005 containerd[1748]: time="2025-07-12T00:09:37.822310315Z" level=info msg="CreateContainer within sandbox \"9644b1419e1772481fea3fb0f6b6c58244dab7b76e5eff6c36e4e9d538288dbe\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cac7d7a084ec2b5aaca633a8c09a12bdbb20b4d6e6b78220c48fe87f50d7f42f\"" Jul 12 00:09:37.823005 containerd[1748]: time="2025-07-12T00:09:37.822517875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:37.824690 containerd[1748]: time="2025-07-12T00:09:37.824653073Z" level=info msg="StartContainer for \"cac7d7a084ec2b5aaca633a8c09a12bdbb20b4d6e6b78220c48fe87f50d7f42f\"" Jul 12 00:09:37.832483 containerd[1748]: time="2025-07-12T00:09:37.832452424Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 12 00:09:37.834019 containerd[1748]: time="2025-07-12T00:09:37.833976622Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:37.840201 containerd[1748]: time="2025-07-12T00:09:37.839414896Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:37.840201 containerd[1748]: time="2025-07-12T00:09:37.840060135Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 4.4775261s" Jul 12 00:09:37.840201 containerd[1748]: time="2025-07-12T00:09:37.840090935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 12 00:09:37.844612 containerd[1748]: time="2025-07-12T00:09:37.844578530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 12 00:09:37.854904 systemd[1]: Started cri-containerd-cac7d7a084ec2b5aaca633a8c09a12bdbb20b4d6e6b78220c48fe87f50d7f42f.scope - libcontainer container cac7d7a084ec2b5aaca633a8c09a12bdbb20b4d6e6b78220c48fe87f50d7f42f. 
Jul 12 00:09:37.865341 containerd[1748]: time="2025-07-12T00:09:37.865226466Z" level=info msg="CreateContainer within sandbox \"f80ede8a92f89ebf347ff22d0a19e138e4102d89dd7fbcbfa129ed2a6c4129b8\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 12 00:09:37.866506 systemd-networkd[1562]: cali7201d1eddc8: Gained IPv6LL Jul 12 00:09:37.892556 containerd[1748]: time="2025-07-12T00:09:37.892512715Z" level=info msg="StartContainer for \"cac7d7a084ec2b5aaca633a8c09a12bdbb20b4d6e6b78220c48fe87f50d7f42f\" returns successfully" Jul 12 00:09:37.930177 containerd[1748]: time="2025-07-12T00:09:37.930029233Z" level=info msg="CreateContainer within sandbox \"f80ede8a92f89ebf347ff22d0a19e138e4102d89dd7fbcbfa129ed2a6c4129b8\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"711629c4fc57817b644db027a4c4ba677150a8eb7f999193ae47ac853429ab73\"" Jul 12 00:09:37.931140 containerd[1748]: time="2025-07-12T00:09:37.930881992Z" level=info msg="StartContainer for \"711629c4fc57817b644db027a4c4ba677150a8eb7f999193ae47ac853429ab73\"" Jul 12 00:09:37.959423 systemd[1]: Started cri-containerd-711629c4fc57817b644db027a4c4ba677150a8eb7f999193ae47ac853429ab73.scope - libcontainer container 711629c4fc57817b644db027a4c4ba677150a8eb7f999193ae47ac853429ab73. 
Jul 12 00:09:37.994646 containerd[1748]: time="2025-07-12T00:09:37.994467119Z" level=info msg="StartContainer for \"711629c4fc57817b644db027a4c4ba677150a8eb7f999193ae47ac853429ab73\" returns successfully" Jul 12 00:09:38.279115 kubelet[3127]: I0712 00:09:38.279050 3127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jvx6w" podStartSLOduration=47.279033795 podStartE2EDuration="47.279033795s" podCreationTimestamp="2025-07-12 00:08:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:09:38.235356205 +0000 UTC m=+53.406066341" watchObservedRunningTime="2025-07-12 00:09:38.279033795 +0000 UTC m=+53.449743931" Jul 12 00:09:38.318277 kubelet[3127]: I0712 00:09:38.317430 3127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-64fd8cd9c6-dfxmq" podStartSLOduration=23.836131535 podStartE2EDuration="28.317410431s" podCreationTimestamp="2025-07-12 00:09:10 +0000 UTC" firstStartedPulling="2025-07-12 00:09:33.359684318 +0000 UTC m=+48.530394454" lastFinishedPulling="2025-07-12 00:09:37.840963254 +0000 UTC m=+53.011673350" observedRunningTime="2025-07-12 00:09:38.30062237 +0000 UTC m=+53.471332506" watchObservedRunningTime="2025-07-12 00:09:38.317410431 +0000 UTC m=+53.488120567" Jul 12 00:09:38.506419 systemd-networkd[1562]: cali1a4c3c4da62: Gained IPv6LL Jul 12 00:09:38.634457 systemd-networkd[1562]: califfb9fbe2dcc: Gained IPv6LL Jul 12 00:09:38.698458 systemd-networkd[1562]: calie17d4eee484: Gained IPv6LL Jul 12 00:09:40.681351 containerd[1748]: time="2025-07-12T00:09:40.681298894Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:40.691132 containerd[1748]: time="2025-07-12T00:09:40.690932723Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 12 00:09:40.697570 containerd[1748]: time="2025-07-12T00:09:40.697513076Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:40.711166 containerd[1748]: time="2025-07-12T00:09:40.711119380Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:40.712217 containerd[1748]: time="2025-07-12T00:09:40.712062579Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 2.867311329s" Jul 12 00:09:40.712217 containerd[1748]: time="2025-07-12T00:09:40.712096659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 12 00:09:40.714805 containerd[1748]: time="2025-07-12T00:09:40.713834977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 12 00:09:40.720948 containerd[1748]: time="2025-07-12T00:09:40.720808849Z" level=info msg="CreateContainer within sandbox \"5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 00:09:40.765676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2737945956.mount: Deactivated successfully. 
Jul 12 00:09:40.782960 containerd[1748]: time="2025-07-12T00:09:40.782909618Z" level=info msg="CreateContainer within sandbox \"5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b3ee9725dcc468c6f1f9e463ff3afbb0f202d43620d8012b4dc9fcbb9cddc252\"" Jul 12 00:09:40.783519 containerd[1748]: time="2025-07-12T00:09:40.783488057Z" level=info msg="StartContainer for \"b3ee9725dcc468c6f1f9e463ff3afbb0f202d43620d8012b4dc9fcbb9cddc252\"" Jul 12 00:09:40.843449 systemd[1]: Started cri-containerd-b3ee9725dcc468c6f1f9e463ff3afbb0f202d43620d8012b4dc9fcbb9cddc252.scope - libcontainer container b3ee9725dcc468c6f1f9e463ff3afbb0f202d43620d8012b4dc9fcbb9cddc252. Jul 12 00:09:40.879683 containerd[1748]: time="2025-07-12T00:09:40.879636507Z" level=info msg="StartContainer for \"b3ee9725dcc468c6f1f9e463ff3afbb0f202d43620d8012b4dc9fcbb9cddc252\" returns successfully" Jul 12 00:09:40.887540 kubelet[3127]: I0712 00:09:40.887225 3127 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:09:41.118797 containerd[1748]: time="2025-07-12T00:09:41.118743074Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:41.124423 containerd[1748]: time="2025-07-12T00:09:41.124382627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 12 00:09:41.127575 containerd[1748]: time="2025-07-12T00:09:41.127536944Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 413.661207ms" Jul 12 00:09:41.127676 
containerd[1748]: time="2025-07-12T00:09:41.127601144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 12 00:09:41.130145 containerd[1748]: time="2025-07-12T00:09:41.130110821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 12 00:09:41.141177 containerd[1748]: time="2025-07-12T00:09:41.141136928Z" level=info msg="CreateContainer within sandbox \"cb95d0b7138ddaf103db71f0d9d80199dcf1f41091d2cf62b151d6ed7f6d4bac\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 00:09:41.220576 containerd[1748]: time="2025-07-12T00:09:41.220524117Z" level=info msg="CreateContainer within sandbox \"cb95d0b7138ddaf103db71f0d9d80199dcf1f41091d2cf62b151d6ed7f6d4bac\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"59c440c4fa063b32d99feb657d7c833de52d8248f188f8797037fc87061571b4\"" Jul 12 00:09:41.222358 containerd[1748]: time="2025-07-12T00:09:41.221406396Z" level=info msg="StartContainer for \"59c440c4fa063b32d99feb657d7c833de52d8248f188f8797037fc87061571b4\"" Jul 12 00:09:41.281330 systemd[1]: Started cri-containerd-59c440c4fa063b32d99feb657d7c833de52d8248f188f8797037fc87061571b4.scope - libcontainer container 59c440c4fa063b32d99feb657d7c833de52d8248f188f8797037fc87061571b4. 
Jul 12 00:09:41.330006 containerd[1748]: time="2025-07-12T00:09:41.329750592Z" level=info msg="StartContainer for \"59c440c4fa063b32d99feb657d7c833de52d8248f188f8797037fc87061571b4\" returns successfully" Jul 12 00:09:42.281407 kubelet[3127]: I0712 00:09:42.281206 3127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-79f66ccc75-c2rbd" podStartSLOduration=28.435764827 podStartE2EDuration="35.281186544s" podCreationTimestamp="2025-07-12 00:09:07 +0000 UTC" firstStartedPulling="2025-07-12 00:09:34.283701665 +0000 UTC m=+49.454411801" lastFinishedPulling="2025-07-12 00:09:41.129123382 +0000 UTC m=+56.299833518" observedRunningTime="2025-07-12 00:09:42.279355586 +0000 UTC m=+57.450065722" watchObservedRunningTime="2025-07-12 00:09:42.281186544 +0000 UTC m=+57.451896720" Jul 12 00:09:42.281797 kubelet[3127]: I0712 00:09:42.281457 3127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7fc958d55f-sgjtg" podStartSLOduration=30.014486905 podStartE2EDuration="37.281449864s" podCreationTimestamp="2025-07-12 00:09:05 +0000 UTC" firstStartedPulling="2025-07-12 00:09:33.446178459 +0000 UTC m=+48.616888595" lastFinishedPulling="2025-07-12 00:09:40.713141458 +0000 UTC m=+55.883851554" observedRunningTime="2025-07-12 00:09:41.267052344 +0000 UTC m=+56.437762480" watchObservedRunningTime="2025-07-12 00:09:42.281449864 +0000 UTC m=+57.452159960" Jul 12 00:09:42.482383 containerd[1748]: time="2025-07-12T00:09:42.482290594Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:42.487429 containerd[1748]: time="2025-07-12T00:09:42.487384548Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 12 00:09:42.501997 containerd[1748]: time="2025-07-12T00:09:42.501819132Z" level=info msg="ImageCreate event 
name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:42.509942 containerd[1748]: time="2025-07-12T00:09:42.509106083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:42.509942 containerd[1748]: time="2025-07-12T00:09:42.509819922Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.379674741s" Jul 12 00:09:42.509942 containerd[1748]: time="2025-07-12T00:09:42.509850122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 12 00:09:42.513057 containerd[1748]: time="2025-07-12T00:09:42.512863439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 12 00:09:42.520970 containerd[1748]: time="2025-07-12T00:09:42.520935790Z" level=info msg="CreateContainer within sandbox \"b59abbe8b833b0853e277023384fd63a08e03a185c0fd609e3293f9c501b3eb0\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 12 00:09:42.586301 containerd[1748]: time="2025-07-12T00:09:42.586175155Z" level=info msg="CreateContainer within sandbox \"b59abbe8b833b0853e277023384fd63a08e03a185c0fd609e3293f9c501b3eb0\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"bbbe9cb87da985306b9c0eba3d8c1d4b18e599a1521ae9a29f8c27f4b0a4f361\"" Jul 12 00:09:42.587826 containerd[1748]: time="2025-07-12T00:09:42.587792593Z" level=info msg="StartContainer for 
\"bbbe9cb87da985306b9c0eba3d8c1d4b18e599a1521ae9a29f8c27f4b0a4f361\"" Jul 12 00:09:42.634951 systemd[1]: run-containerd-runc-k8s.io-bbbe9cb87da985306b9c0eba3d8c1d4b18e599a1521ae9a29f8c27f4b0a4f361-runc.SzrSLA.mount: Deactivated successfully. Jul 12 00:09:42.643553 systemd[1]: Started cri-containerd-bbbe9cb87da985306b9c0eba3d8c1d4b18e599a1521ae9a29f8c27f4b0a4f361.scope - libcontainer container bbbe9cb87da985306b9c0eba3d8c1d4b18e599a1521ae9a29f8c27f4b0a4f361. Jul 12 00:09:42.716770 containerd[1748]: time="2025-07-12T00:09:42.716625086Z" level=info msg="StartContainer for \"bbbe9cb87da985306b9c0eba3d8c1d4b18e599a1521ae9a29f8c27f4b0a4f361\" returns successfully" Jul 12 00:09:43.264973 kubelet[3127]: I0712 00:09:43.264732 3127 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:09:44.954220 containerd[1748]: time="2025-07-12T00:09:44.954178286Z" level=info msg="StopPodSandbox for \"d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5\"" Jul 12 00:09:45.041035 containerd[1748]: 2025-07-12 00:09:44.992 [WARNING][6024] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--047a586f92-k8s-calico--kube--controllers--64fd8cd9c6--dfxmq-eth0", GenerateName:"calico-kube-controllers-64fd8cd9c6-", Namespace:"calico-system", SelfLink:"", UID:"33b79413-179b-4bb3-828a-0fadd7c84383", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64fd8cd9c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-047a586f92", ContainerID:"f80ede8a92f89ebf347ff22d0a19e138e4102d89dd7fbcbfa129ed2a6c4129b8", Pod:"calico-kube-controllers-64fd8cd9c6-dfxmq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.35.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali36c2931254e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:45.041035 containerd[1748]: 2025-07-12 00:09:44.992 [INFO][6024] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" Jul 12 00:09:45.041035 containerd[1748]: 2025-07-12 00:09:44.992 [INFO][6024] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" iface="eth0" netns="" Jul 12 00:09:45.041035 containerd[1748]: 2025-07-12 00:09:44.992 [INFO][6024] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" Jul 12 00:09:45.041035 containerd[1748]: 2025-07-12 00:09:44.992 [INFO][6024] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" Jul 12 00:09:45.041035 containerd[1748]: 2025-07-12 00:09:45.027 [INFO][6032] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" HandleID="k8s-pod-network.d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--kube--controllers--64fd8cd9c6--dfxmq-eth0" Jul 12 00:09:45.041035 containerd[1748]: 2025-07-12 00:09:45.027 [INFO][6032] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:45.041035 containerd[1748]: 2025-07-12 00:09:45.027 [INFO][6032] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:45.041035 containerd[1748]: 2025-07-12 00:09:45.036 [WARNING][6032] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" HandleID="k8s-pod-network.d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--kube--controllers--64fd8cd9c6--dfxmq-eth0" Jul 12 00:09:45.041035 containerd[1748]: 2025-07-12 00:09:45.036 [INFO][6032] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" HandleID="k8s-pod-network.d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--kube--controllers--64fd8cd9c6--dfxmq-eth0" Jul 12 00:09:45.041035 containerd[1748]: 2025-07-12 00:09:45.037 [INFO][6032] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:45.041035 containerd[1748]: 2025-07-12 00:09:45.039 [INFO][6024] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" Jul 12 00:09:45.041685 containerd[1748]: time="2025-07-12T00:09:45.041075227Z" level=info msg="TearDown network for sandbox \"d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5\" successfully" Jul 12 00:09:45.041685 containerd[1748]: time="2025-07-12T00:09:45.041099467Z" level=info msg="StopPodSandbox for \"d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5\" returns successfully" Jul 12 00:09:45.041773 containerd[1748]: time="2025-07-12T00:09:45.041737586Z" level=info msg="RemovePodSandbox for \"d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5\"" Jul 12 00:09:45.041808 containerd[1748]: time="2025-07-12T00:09:45.041774946Z" level=info msg="Forcibly stopping sandbox \"d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5\"" Jul 12 00:09:45.111621 containerd[1748]: 2025-07-12 00:09:45.076 [WARNING][6047] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--047a586f92-k8s-calico--kube--controllers--64fd8cd9c6--dfxmq-eth0", GenerateName:"calico-kube-controllers-64fd8cd9c6-", Namespace:"calico-system", SelfLink:"", UID:"33b79413-179b-4bb3-828a-0fadd7c84383", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64fd8cd9c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-047a586f92", ContainerID:"f80ede8a92f89ebf347ff22d0a19e138e4102d89dd7fbcbfa129ed2a6c4129b8", Pod:"calico-kube-controllers-64fd8cd9c6-dfxmq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.35.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali36c2931254e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:45.111621 containerd[1748]: 2025-07-12 00:09:45.076 [INFO][6047] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" Jul 12 00:09:45.111621 containerd[1748]: 2025-07-12 00:09:45.076 [INFO][6047] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" iface="eth0" netns="" Jul 12 00:09:45.111621 containerd[1748]: 2025-07-12 00:09:45.076 [INFO][6047] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" Jul 12 00:09:45.111621 containerd[1748]: 2025-07-12 00:09:45.076 [INFO][6047] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" Jul 12 00:09:45.111621 containerd[1748]: 2025-07-12 00:09:45.097 [INFO][6054] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" HandleID="k8s-pod-network.d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--kube--controllers--64fd8cd9c6--dfxmq-eth0" Jul 12 00:09:45.111621 containerd[1748]: 2025-07-12 00:09:45.098 [INFO][6054] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:45.111621 containerd[1748]: 2025-07-12 00:09:45.098 [INFO][6054] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:45.111621 containerd[1748]: 2025-07-12 00:09:45.107 [WARNING][6054] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" HandleID="k8s-pod-network.d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--kube--controllers--64fd8cd9c6--dfxmq-eth0" Jul 12 00:09:45.111621 containerd[1748]: 2025-07-12 00:09:45.107 [INFO][6054] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" HandleID="k8s-pod-network.d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--kube--controllers--64fd8cd9c6--dfxmq-eth0" Jul 12 00:09:45.111621 containerd[1748]: 2025-07-12 00:09:45.108 [INFO][6054] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:45.111621 containerd[1748]: 2025-07-12 00:09:45.110 [INFO][6047] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5" Jul 12 00:09:45.111621 containerd[1748]: time="2025-07-12T00:09:45.111690546Z" level=info msg="TearDown network for sandbox \"d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5\" successfully" Jul 12 00:09:45.133861 containerd[1748]: time="2025-07-12T00:09:45.133276441Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 12 00:09:45.133861 containerd[1748]: time="2025-07-12T00:09:45.133390481Z" level=info msg="RemovePodSandbox \"d992a33e67219b4030ddaab5af74ed55fd96eab6c58113cdd0cd9b2b02e1b0b5\" returns successfully" Jul 12 00:09:45.134463 containerd[1748]: time="2025-07-12T00:09:45.134126000Z" level=info msg="StopPodSandbox for \"5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af\"" Jul 12 00:09:45.233221 containerd[1748]: 2025-07-12 00:09:45.190 [WARNING][6068] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--047a586f92-k8s-csi--node--driver--26mjl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0465de75-2781-421a-b1c8-807d08b402b9", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-047a586f92", ContainerID:"b59abbe8b833b0853e277023384fd63a08e03a185c0fd609e3293f9c501b3eb0", Pod:"csi-node-driver-26mjl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.35.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali61478a4999b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:45.233221 containerd[1748]: 2025-07-12 00:09:45.191 [INFO][6068] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" Jul 12 00:09:45.233221 containerd[1748]: 2025-07-12 00:09:45.191 [INFO][6068] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" iface="eth0" netns="" Jul 12 00:09:45.233221 containerd[1748]: 2025-07-12 00:09:45.191 [INFO][6068] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" Jul 12 00:09:45.233221 containerd[1748]: 2025-07-12 00:09:45.191 [INFO][6068] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" Jul 12 00:09:45.233221 containerd[1748]: 2025-07-12 00:09:45.217 [INFO][6081] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" HandleID="k8s-pod-network.5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" Workload="ci--4081.3.4--n--047a586f92-k8s-csi--node--driver--26mjl-eth0" Jul 12 00:09:45.233221 containerd[1748]: 2025-07-12 00:09:45.217 [INFO][6081] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:45.233221 containerd[1748]: 2025-07-12 00:09:45.217 [INFO][6081] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:45.233221 containerd[1748]: 2025-07-12 00:09:45.228 [WARNING][6081] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" HandleID="k8s-pod-network.5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" Workload="ci--4081.3.4--n--047a586f92-k8s-csi--node--driver--26mjl-eth0" Jul 12 00:09:45.233221 containerd[1748]: 2025-07-12 00:09:45.228 [INFO][6081] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" HandleID="k8s-pod-network.5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" Workload="ci--4081.3.4--n--047a586f92-k8s-csi--node--driver--26mjl-eth0" Jul 12 00:09:45.233221 containerd[1748]: 2025-07-12 00:09:45.229 [INFO][6081] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:45.233221 containerd[1748]: 2025-07-12 00:09:45.231 [INFO][6068] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" Jul 12 00:09:45.233221 containerd[1748]: time="2025-07-12T00:09:45.232955687Z" level=info msg="TearDown network for sandbox \"5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af\" successfully" Jul 12 00:09:45.233221 containerd[1748]: time="2025-07-12T00:09:45.232979407Z" level=info msg="StopPodSandbox for \"5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af\" returns successfully" Jul 12 00:09:45.234431 containerd[1748]: time="2025-07-12T00:09:45.233698046Z" level=info msg="RemovePodSandbox for \"5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af\"" Jul 12 00:09:45.234431 containerd[1748]: time="2025-07-12T00:09:45.233727046Z" level=info msg="Forcibly stopping sandbox \"5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af\"" Jul 12 00:09:45.304017 containerd[1748]: 2025-07-12 00:09:45.268 [WARNING][6095] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--047a586f92-k8s-csi--node--driver--26mjl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0465de75-2781-421a-b1c8-807d08b402b9", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-047a586f92", ContainerID:"b59abbe8b833b0853e277023384fd63a08e03a185c0fd609e3293f9c501b3eb0", Pod:"csi-node-driver-26mjl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.35.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali61478a4999b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:45.304017 containerd[1748]: 2025-07-12 00:09:45.269 [INFO][6095] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" Jul 12 00:09:45.304017 containerd[1748]: 2025-07-12 00:09:45.269 [INFO][6095] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" iface="eth0" netns="" Jul 12 00:09:45.304017 containerd[1748]: 2025-07-12 00:09:45.269 [INFO][6095] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" Jul 12 00:09:45.304017 containerd[1748]: 2025-07-12 00:09:45.269 [INFO][6095] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" Jul 12 00:09:45.304017 containerd[1748]: 2025-07-12 00:09:45.290 [INFO][6102] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" HandleID="k8s-pod-network.5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" Workload="ci--4081.3.4--n--047a586f92-k8s-csi--node--driver--26mjl-eth0" Jul 12 00:09:45.304017 containerd[1748]: 2025-07-12 00:09:45.290 [INFO][6102] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:45.304017 containerd[1748]: 2025-07-12 00:09:45.290 [INFO][6102] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:45.304017 containerd[1748]: 2025-07-12 00:09:45.299 [WARNING][6102] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" HandleID="k8s-pod-network.5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" Workload="ci--4081.3.4--n--047a586f92-k8s-csi--node--driver--26mjl-eth0" Jul 12 00:09:45.304017 containerd[1748]: 2025-07-12 00:09:45.299 [INFO][6102] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" HandleID="k8s-pod-network.5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" Workload="ci--4081.3.4--n--047a586f92-k8s-csi--node--driver--26mjl-eth0" Jul 12 00:09:45.304017 containerd[1748]: 2025-07-12 00:09:45.301 [INFO][6102] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:45.304017 containerd[1748]: 2025-07-12 00:09:45.302 [INFO][6095] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af" Jul 12 00:09:45.304522 containerd[1748]: time="2025-07-12T00:09:45.304066446Z" level=info msg="TearDown network for sandbox \"5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af\" successfully" Jul 12 00:09:45.316692 containerd[1748]: time="2025-07-12T00:09:45.316618032Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 12 00:09:45.316807 containerd[1748]: time="2025-07-12T00:09:45.316710511Z" level=info msg="RemovePodSandbox \"5612a27b935e302146eee8d95ede94c114f7043186195d065f826984842c42af\" returns successfully" Jul 12 00:09:45.317356 containerd[1748]: time="2025-07-12T00:09:45.317333871Z" level=info msg="StopPodSandbox for \"c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045\"" Jul 12 00:09:45.384360 containerd[1748]: 2025-07-12 00:09:45.351 [WARNING][6116] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--c2rbd-eth0", GenerateName:"calico-apiserver-79f66ccc75-", Namespace:"calico-apiserver", SelfLink:"", UID:"c64c54d7-e2ea-42e7-9d83-27f006d7ff1f", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79f66ccc75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-047a586f92", ContainerID:"cb95d0b7138ddaf103db71f0d9d80199dcf1f41091d2cf62b151d6ed7f6d4bac", Pod:"calico-apiserver-79f66ccc75-c2rbd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali45fb9f88c66", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:45.384360 containerd[1748]: 2025-07-12 00:09:45.352 [INFO][6116] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" Jul 12 00:09:45.384360 containerd[1748]: 2025-07-12 00:09:45.352 [INFO][6116] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" iface="eth0" netns="" Jul 12 00:09:45.384360 containerd[1748]: 2025-07-12 00:09:45.352 [INFO][6116] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" Jul 12 00:09:45.384360 containerd[1748]: 2025-07-12 00:09:45.352 [INFO][6116] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" Jul 12 00:09:45.384360 containerd[1748]: 2025-07-12 00:09:45.371 [INFO][6123] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" HandleID="k8s-pod-network.c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--c2rbd-eth0" Jul 12 00:09:45.384360 containerd[1748]: 2025-07-12 00:09:45.371 [INFO][6123] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:45.384360 containerd[1748]: 2025-07-12 00:09:45.371 [INFO][6123] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:45.384360 containerd[1748]: 2025-07-12 00:09:45.380 [WARNING][6123] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" HandleID="k8s-pod-network.c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--c2rbd-eth0" Jul 12 00:09:45.384360 containerd[1748]: 2025-07-12 00:09:45.380 [INFO][6123] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" HandleID="k8s-pod-network.c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--c2rbd-eth0" Jul 12 00:09:45.384360 containerd[1748]: 2025-07-12 00:09:45.381 [INFO][6123] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:45.384360 containerd[1748]: 2025-07-12 00:09:45.382 [INFO][6116] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" Jul 12 00:09:45.384360 containerd[1748]: time="2025-07-12T00:09:45.384240714Z" level=info msg="TearDown network for sandbox \"c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045\" successfully" Jul 12 00:09:45.384360 containerd[1748]: time="2025-07-12T00:09:45.384287114Z" level=info msg="StopPodSandbox for \"c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045\" returns successfully" Jul 12 00:09:45.384809 containerd[1748]: time="2025-07-12T00:09:45.384726714Z" level=info msg="RemovePodSandbox for \"c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045\"" Jul 12 00:09:45.384809 containerd[1748]: time="2025-07-12T00:09:45.384758794Z" level=info msg="Forcibly stopping sandbox \"c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045\"" Jul 12 00:09:45.453290 containerd[1748]: 2025-07-12 00:09:45.417 [WARNING][6138] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--c2rbd-eth0", GenerateName:"calico-apiserver-79f66ccc75-", Namespace:"calico-apiserver", SelfLink:"", UID:"c64c54d7-e2ea-42e7-9d83-27f006d7ff1f", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79f66ccc75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-047a586f92", ContainerID:"cb95d0b7138ddaf103db71f0d9d80199dcf1f41091d2cf62b151d6ed7f6d4bac", Pod:"calico-apiserver-79f66ccc75-c2rbd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali45fb9f88c66", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:45.453290 containerd[1748]: 2025-07-12 00:09:45.418 [INFO][6138] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" Jul 12 00:09:45.453290 containerd[1748]: 2025-07-12 00:09:45.418 [INFO][6138] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" iface="eth0" netns="" Jul 12 00:09:45.453290 containerd[1748]: 2025-07-12 00:09:45.418 [INFO][6138] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" Jul 12 00:09:45.453290 containerd[1748]: 2025-07-12 00:09:45.418 [INFO][6138] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" Jul 12 00:09:45.453290 containerd[1748]: 2025-07-12 00:09:45.437 [INFO][6146] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" HandleID="k8s-pod-network.c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--c2rbd-eth0" Jul 12 00:09:45.453290 containerd[1748]: 2025-07-12 00:09:45.437 [INFO][6146] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:45.453290 containerd[1748]: 2025-07-12 00:09:45.437 [INFO][6146] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:45.453290 containerd[1748]: 2025-07-12 00:09:45.447 [WARNING][6146] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" HandleID="k8s-pod-network.c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--c2rbd-eth0" Jul 12 00:09:45.453290 containerd[1748]: 2025-07-12 00:09:45.447 [INFO][6146] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" HandleID="k8s-pod-network.c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--c2rbd-eth0" Jul 12 00:09:45.453290 containerd[1748]: 2025-07-12 00:09:45.448 [INFO][6146] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:45.453290 containerd[1748]: 2025-07-12 00:09:45.451 [INFO][6138] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045" Jul 12 00:09:45.453290 containerd[1748]: time="2025-07-12T00:09:45.453085715Z" level=info msg="TearDown network for sandbox \"c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045\" successfully" Jul 12 00:09:45.468335 containerd[1748]: time="2025-07-12T00:09:45.468297978Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 12 00:09:45.468457 containerd[1748]: time="2025-07-12T00:09:45.468370498Z" level=info msg="RemovePodSandbox \"c5cfb224f87c9cf50c9a752b2f0e4d65cfcbc1e6c01060217fd2dac8a1ea9045\" returns successfully" Jul 12 00:09:45.469073 containerd[1748]: time="2025-07-12T00:09:45.468800097Z" level=info msg="StopPodSandbox for \"9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2\"" Jul 12 00:09:45.533917 containerd[1748]: 2025-07-12 00:09:45.500 [WARNING][6160] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--6lhhr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"664d0b79-40b5-4d9c-8498-5a2e2d35a983", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-047a586f92", ContainerID:"762f6eb6d9fd8596cc557ea4ae280a10fc807e9800271855850d53d793a931a6", Pod:"coredns-674b8bbfcf-6lhhr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7201d1eddc8", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:45.533917 containerd[1748]: 2025-07-12 00:09:45.501 [INFO][6160] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" Jul 12 00:09:45.533917 containerd[1748]: 2025-07-12 00:09:45.501 [INFO][6160] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" iface="eth0" netns="" Jul 12 00:09:45.533917 containerd[1748]: 2025-07-12 00:09:45.501 [INFO][6160] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" Jul 12 00:09:45.533917 containerd[1748]: 2025-07-12 00:09:45.501 [INFO][6160] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" Jul 12 00:09:45.533917 containerd[1748]: 2025-07-12 00:09:45.520 [INFO][6167] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" HandleID="k8s-pod-network.9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" Workload="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--6lhhr-eth0" Jul 12 00:09:45.533917 containerd[1748]: 2025-07-12 00:09:45.520 [INFO][6167] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 12 00:09:45.533917 containerd[1748]: 2025-07-12 00:09:45.520 [INFO][6167] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:45.533917 containerd[1748]: 2025-07-12 00:09:45.529 [WARNING][6167] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" HandleID="k8s-pod-network.9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" Workload="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--6lhhr-eth0" Jul 12 00:09:45.533917 containerd[1748]: 2025-07-12 00:09:45.529 [INFO][6167] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" HandleID="k8s-pod-network.9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" Workload="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--6lhhr-eth0" Jul 12 00:09:45.533917 containerd[1748]: 2025-07-12 00:09:45.531 [INFO][6167] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:45.533917 containerd[1748]: 2025-07-12 00:09:45.532 [INFO][6160] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" Jul 12 00:09:45.534547 containerd[1748]: time="2025-07-12T00:09:45.534424942Z" level=info msg="TearDown network for sandbox \"9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2\" successfully" Jul 12 00:09:45.534547 containerd[1748]: time="2025-07-12T00:09:45.534455662Z" level=info msg="StopPodSandbox for \"9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2\" returns successfully" Jul 12 00:09:45.534989 containerd[1748]: time="2025-07-12T00:09:45.534966222Z" level=info msg="RemovePodSandbox for \"9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2\"" Jul 12 00:09:45.535399 containerd[1748]: time="2025-07-12T00:09:45.535076942Z" level=info msg="Forcibly stopping sandbox \"9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2\"" Jul 12 00:09:45.601305 containerd[1748]: 2025-07-12 00:09:45.568 [WARNING][6181] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--6lhhr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"664d0b79-40b5-4d9c-8498-5a2e2d35a983", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-047a586f92", ContainerID:"762f6eb6d9fd8596cc557ea4ae280a10fc807e9800271855850d53d793a931a6", Pod:"coredns-674b8bbfcf-6lhhr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7201d1eddc8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:45.601305 containerd[1748]: 2025-07-12 
00:09:45.568 [INFO][6181] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" Jul 12 00:09:45.601305 containerd[1748]: 2025-07-12 00:09:45.568 [INFO][6181] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" iface="eth0" netns="" Jul 12 00:09:45.601305 containerd[1748]: 2025-07-12 00:09:45.568 [INFO][6181] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" Jul 12 00:09:45.601305 containerd[1748]: 2025-07-12 00:09:45.568 [INFO][6181] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" Jul 12 00:09:45.601305 containerd[1748]: 2025-07-12 00:09:45.588 [INFO][6188] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" HandleID="k8s-pod-network.9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" Workload="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--6lhhr-eth0" Jul 12 00:09:45.601305 containerd[1748]: 2025-07-12 00:09:45.589 [INFO][6188] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:45.601305 containerd[1748]: 2025-07-12 00:09:45.589 [INFO][6188] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:45.601305 containerd[1748]: 2025-07-12 00:09:45.597 [WARNING][6188] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" HandleID="k8s-pod-network.9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" Workload="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--6lhhr-eth0" Jul 12 00:09:45.601305 containerd[1748]: 2025-07-12 00:09:45.597 [INFO][6188] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" HandleID="k8s-pod-network.9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" Workload="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--6lhhr-eth0" Jul 12 00:09:45.601305 containerd[1748]: 2025-07-12 00:09:45.598 [INFO][6188] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:45.601305 containerd[1748]: 2025-07-12 00:09:45.600 [INFO][6181] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2" Jul 12 00:09:45.601871 containerd[1748]: time="2025-07-12T00:09:45.601345186Z" level=info msg="TearDown network for sandbox \"9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2\" successfully" Jul 12 00:09:45.618134 containerd[1748]: time="2025-07-12T00:09:45.618072167Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 12 00:09:45.618342 containerd[1748]: time="2025-07-12T00:09:45.618158047Z" level=info msg="RemovePodSandbox \"9191281bc66bff9e50a15a8e911b0b8c1d9dd5735f7ef25b74bf31f5afd702b2\" returns successfully" Jul 12 00:09:45.618870 containerd[1748]: time="2025-07-12T00:09:45.618759486Z" level=info msg="StopPodSandbox for \"d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485\"" Jul 12 00:09:45.722191 containerd[1748]: 2025-07-12 00:09:45.654 [WARNING][6202] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--jvx6w-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2c55f8f7-b85b-4e3f-974e-90f3a29f93c9", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-047a586f92", ContainerID:"9644b1419e1772481fea3fb0f6b6c58244dab7b76e5eff6c36e4e9d538288dbe", Pod:"coredns-674b8bbfcf-jvx6w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califfb9fbe2dcc", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:45.722191 containerd[1748]: 2025-07-12 00:09:45.655 [INFO][6202] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" Jul 12 00:09:45.722191 containerd[1748]: 2025-07-12 00:09:45.655 [INFO][6202] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" iface="eth0" netns="" Jul 12 00:09:45.722191 containerd[1748]: 2025-07-12 00:09:45.655 [INFO][6202] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" Jul 12 00:09:45.722191 containerd[1748]: 2025-07-12 00:09:45.655 [INFO][6202] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" Jul 12 00:09:45.722191 containerd[1748]: 2025-07-12 00:09:45.678 [INFO][6209] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" HandleID="k8s-pod-network.d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" Workload="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--jvx6w-eth0" Jul 12 00:09:45.722191 containerd[1748]: 2025-07-12 00:09:45.678 [INFO][6209] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 12 00:09:45.722191 containerd[1748]: 2025-07-12 00:09:45.678 [INFO][6209] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:45.722191 containerd[1748]: 2025-07-12 00:09:45.699 [WARNING][6209] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" HandleID="k8s-pod-network.d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" Workload="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--jvx6w-eth0" Jul 12 00:09:45.722191 containerd[1748]: 2025-07-12 00:09:45.699 [INFO][6209] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" HandleID="k8s-pod-network.d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" Workload="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--jvx6w-eth0" Jul 12 00:09:45.722191 containerd[1748]: 2025-07-12 00:09:45.706 [INFO][6209] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:45.722191 containerd[1748]: 2025-07-12 00:09:45.711 [INFO][6202] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" Jul 12 00:09:45.722191 containerd[1748]: time="2025-07-12T00:09:45.722068328Z" level=info msg="TearDown network for sandbox \"d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485\" successfully" Jul 12 00:09:45.722191 containerd[1748]: time="2025-07-12T00:09:45.722092368Z" level=info msg="StopPodSandbox for \"d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485\" returns successfully" Jul 12 00:09:45.728380 containerd[1748]: time="2025-07-12T00:09:45.725413684Z" level=info msg="RemovePodSandbox for \"d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485\"" Jul 12 00:09:45.728380 containerd[1748]: time="2025-07-12T00:09:45.725451964Z" level=info msg="Forcibly stopping sandbox \"d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485\"" Jul 12 00:09:45.865207 containerd[1748]: 2025-07-12 00:09:45.830 [WARNING][6225] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--jvx6w-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2c55f8f7-b85b-4e3f-974e-90f3a29f93c9", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-047a586f92", ContainerID:"9644b1419e1772481fea3fb0f6b6c58244dab7b76e5eff6c36e4e9d538288dbe", Pod:"coredns-674b8bbfcf-jvx6w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califfb9fbe2dcc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:45.865207 containerd[1748]: 2025-07-12 
00:09:45.830 [INFO][6225] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" Jul 12 00:09:45.865207 containerd[1748]: 2025-07-12 00:09:45.830 [INFO][6225] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" iface="eth0" netns="" Jul 12 00:09:45.865207 containerd[1748]: 2025-07-12 00:09:45.830 [INFO][6225] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" Jul 12 00:09:45.865207 containerd[1748]: 2025-07-12 00:09:45.830 [INFO][6225] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" Jul 12 00:09:45.865207 containerd[1748]: 2025-07-12 00:09:45.850 [INFO][6232] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" HandleID="k8s-pod-network.d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" Workload="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--jvx6w-eth0" Jul 12 00:09:45.865207 containerd[1748]: 2025-07-12 00:09:45.851 [INFO][6232] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:45.865207 containerd[1748]: 2025-07-12 00:09:45.851 [INFO][6232] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:45.865207 containerd[1748]: 2025-07-12 00:09:45.861 [WARNING][6232] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" HandleID="k8s-pod-network.d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" Workload="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--jvx6w-eth0" Jul 12 00:09:45.865207 containerd[1748]: 2025-07-12 00:09:45.861 [INFO][6232] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" HandleID="k8s-pod-network.d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" Workload="ci--4081.3.4--n--047a586f92-k8s-coredns--674b8bbfcf--jvx6w-eth0" Jul 12 00:09:45.865207 containerd[1748]: 2025-07-12 00:09:45.862 [INFO][6232] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:45.865207 containerd[1748]: 2025-07-12 00:09:45.863 [INFO][6225] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485" Jul 12 00:09:45.866038 containerd[1748]: time="2025-07-12T00:09:45.865858563Z" level=info msg="TearDown network for sandbox \"d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485\" successfully" Jul 12 00:09:46.116736 containerd[1748]: time="2025-07-12T00:09:46.116617356Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 12 00:09:46.117131 containerd[1748]: time="2025-07-12T00:09:46.116929516Z" level=info msg="RemovePodSandbox \"d7299c21b81604bb215e68c6da89b1698810e723dd308e0de41c1e4f6de00485\" returns successfully" Jul 12 00:09:46.118391 containerd[1748]: time="2025-07-12T00:09:46.118360234Z" level=info msg="StopPodSandbox for \"c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2\"" Jul 12 00:09:46.224369 containerd[1748]: 2025-07-12 00:09:46.179 [WARNING][6250] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-whisker--758f9f458--7vxfp-eth0" Jul 12 00:09:46.224369 containerd[1748]: 2025-07-12 00:09:46.181 [INFO][6250] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" Jul 12 00:09:46.224369 containerd[1748]: 2025-07-12 00:09:46.181 [INFO][6250] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" iface="eth0" netns="" Jul 12 00:09:46.224369 containerd[1748]: 2025-07-12 00:09:46.181 [INFO][6250] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" Jul 12 00:09:46.224369 containerd[1748]: 2025-07-12 00:09:46.181 [INFO][6250] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" Jul 12 00:09:46.224369 containerd[1748]: 2025-07-12 00:09:46.205 [INFO][6258] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" HandleID="k8s-pod-network.c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" Workload="ci--4081.3.4--n--047a586f92-k8s-whisker--758f9f458--7vxfp-eth0" Jul 12 00:09:46.224369 containerd[1748]: 2025-07-12 00:09:46.205 [INFO][6258] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:46.224369 containerd[1748]: 2025-07-12 00:09:46.205 [INFO][6258] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:46.224369 containerd[1748]: 2025-07-12 00:09:46.217 [WARNING][6258] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" HandleID="k8s-pod-network.c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" Workload="ci--4081.3.4--n--047a586f92-k8s-whisker--758f9f458--7vxfp-eth0" Jul 12 00:09:46.224369 containerd[1748]: 2025-07-12 00:09:46.218 [INFO][6258] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" HandleID="k8s-pod-network.c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" Workload="ci--4081.3.4--n--047a586f92-k8s-whisker--758f9f458--7vxfp-eth0" Jul 12 00:09:46.224369 containerd[1748]: 2025-07-12 00:09:46.221 [INFO][6258] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:46.224369 containerd[1748]: 2025-07-12 00:09:46.223 [INFO][6250] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" Jul 12 00:09:46.225145 containerd[1748]: time="2025-07-12T00:09:46.224410353Z" level=info msg="TearDown network for sandbox \"c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2\" successfully" Jul 12 00:09:46.225145 containerd[1748]: time="2025-07-12T00:09:46.224436953Z" level=info msg="StopPodSandbox for \"c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2\" returns successfully" Jul 12 00:09:46.225622 containerd[1748]: time="2025-07-12T00:09:46.225575632Z" level=info msg="RemovePodSandbox for \"c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2\"" Jul 12 00:09:46.225622 containerd[1748]: time="2025-07-12T00:09:46.225617432Z" level=info msg="Forcibly stopping sandbox \"c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2\"" Jul 12 00:09:46.327308 containerd[1748]: 2025-07-12 00:09:46.270 [WARNING][6272] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-whisker--758f9f458--7vxfp-eth0" Jul 12 00:09:46.327308 containerd[1748]: 2025-07-12 00:09:46.271 [INFO][6272] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" Jul 12 00:09:46.327308 containerd[1748]: 2025-07-12 00:09:46.271 [INFO][6272] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" iface="eth0" netns="" Jul 12 00:09:46.327308 containerd[1748]: 2025-07-12 00:09:46.271 [INFO][6272] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" Jul 12 00:09:46.327308 containerd[1748]: 2025-07-12 00:09:46.271 [INFO][6272] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" Jul 12 00:09:46.327308 containerd[1748]: 2025-07-12 00:09:46.303 [INFO][6280] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" HandleID="k8s-pod-network.c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" Workload="ci--4081.3.4--n--047a586f92-k8s-whisker--758f9f458--7vxfp-eth0" Jul 12 00:09:46.327308 containerd[1748]: 2025-07-12 00:09:46.304 [INFO][6280] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:46.327308 containerd[1748]: 2025-07-12 00:09:46.304 [INFO][6280] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:46.327308 containerd[1748]: 2025-07-12 00:09:46.319 [WARNING][6280] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" HandleID="k8s-pod-network.c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" Workload="ci--4081.3.4--n--047a586f92-k8s-whisker--758f9f458--7vxfp-eth0" Jul 12 00:09:46.327308 containerd[1748]: 2025-07-12 00:09:46.319 [INFO][6280] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" HandleID="k8s-pod-network.c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" Workload="ci--4081.3.4--n--047a586f92-k8s-whisker--758f9f458--7vxfp-eth0" Jul 12 00:09:46.327308 containerd[1748]: 2025-07-12 00:09:46.321 [INFO][6280] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:46.327308 containerd[1748]: 2025-07-12 00:09:46.323 [INFO][6272] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2" Jul 12 00:09:46.328207 containerd[1748]: time="2025-07-12T00:09:46.327222435Z" level=info msg="TearDown network for sandbox \"c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2\" successfully" Jul 12 00:09:46.341836 containerd[1748]: time="2025-07-12T00:09:46.341640939Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 12 00:09:46.341836 containerd[1748]: time="2025-07-12T00:09:46.341715099Z" level=info msg="RemovePodSandbox \"c868d401ce93f39bac4d29d70aa6909bc9b6b38d92fea08c13e68e831bf450e2\" returns successfully" Jul 12 00:09:46.342592 containerd[1748]: time="2025-07-12T00:09:46.342297938Z" level=info msg="StopPodSandbox for \"00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417\"" Jul 12 00:09:46.499604 containerd[1748]: 2025-07-12 00:09:46.440 [WARNING][6295] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--sgjtg-eth0", GenerateName:"calico-apiserver-7fc958d55f-", Namespace:"calico-apiserver", SelfLink:"", UID:"9c5661c3-a6ae-4537-a0fb-159b28c4d8b2", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fc958d55f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-047a586f92", ContainerID:"5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569", Pod:"calico-apiserver-7fc958d55f-sgjtg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidefe61e03cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:46.499604 containerd[1748]: 2025-07-12 00:09:46.440 [INFO][6295] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" Jul 12 00:09:46.499604 containerd[1748]: 2025-07-12 00:09:46.440 [INFO][6295] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" iface="eth0" netns="" Jul 12 00:09:46.499604 containerd[1748]: 2025-07-12 00:09:46.440 [INFO][6295] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" Jul 12 00:09:46.499604 containerd[1748]: 2025-07-12 00:09:46.440 [INFO][6295] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" Jul 12 00:09:46.499604 containerd[1748]: 2025-07-12 00:09:46.477 [INFO][6302] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" HandleID="k8s-pod-network.00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--sgjtg-eth0" Jul 12 00:09:46.499604 containerd[1748]: 2025-07-12 00:09:46.477 [INFO][6302] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:46.499604 containerd[1748]: 2025-07-12 00:09:46.477 [INFO][6302] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:46.499604 containerd[1748]: 2025-07-12 00:09:46.488 [WARNING][6302] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" HandleID="k8s-pod-network.00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--sgjtg-eth0" Jul 12 00:09:46.499604 containerd[1748]: 2025-07-12 00:09:46.488 [INFO][6302] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" HandleID="k8s-pod-network.00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--sgjtg-eth0" Jul 12 00:09:46.499604 containerd[1748]: 2025-07-12 00:09:46.491 [INFO][6302] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:46.499604 containerd[1748]: 2025-07-12 00:09:46.495 [INFO][6295] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" Jul 12 00:09:46.500986 containerd[1748]: time="2025-07-12T00:09:46.499780238Z" level=info msg="TearDown network for sandbox \"00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417\" successfully" Jul 12 00:09:46.500986 containerd[1748]: time="2025-07-12T00:09:46.500656877Z" level=info msg="StopPodSandbox for \"00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417\" returns successfully" Jul 12 00:09:46.502432 containerd[1748]: time="2025-07-12T00:09:46.502294835Z" level=info msg="RemovePodSandbox for \"00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417\"" Jul 12 00:09:46.502604 containerd[1748]: time="2025-07-12T00:09:46.502546035Z" level=info msg="Forcibly stopping sandbox \"00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417\"" Jul 12 00:09:46.688402 containerd[1748]: 2025-07-12 00:09:46.601 [WARNING][6316] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--sgjtg-eth0", GenerateName:"calico-apiserver-7fc958d55f-", Namespace:"calico-apiserver", SelfLink:"", UID:"9c5661c3-a6ae-4537-a0fb-159b28c4d8b2", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fc958d55f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-047a586f92", ContainerID:"5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569", Pod:"calico-apiserver-7fc958d55f-sgjtg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidefe61e03cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:46.688402 containerd[1748]: 2025-07-12 00:09:46.602 [INFO][6316] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" Jul 12 00:09:46.688402 containerd[1748]: 2025-07-12 00:09:46.602 [INFO][6316] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" iface="eth0" netns="" Jul 12 00:09:46.688402 containerd[1748]: 2025-07-12 00:09:46.602 [INFO][6316] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" Jul 12 00:09:46.688402 containerd[1748]: 2025-07-12 00:09:46.602 [INFO][6316] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" Jul 12 00:09:46.688402 containerd[1748]: 2025-07-12 00:09:46.658 [INFO][6323] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" HandleID="k8s-pod-network.00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--sgjtg-eth0" Jul 12 00:09:46.688402 containerd[1748]: 2025-07-12 00:09:46.660 [INFO][6323] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:46.688402 containerd[1748]: 2025-07-12 00:09:46.660 [INFO][6323] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:46.688402 containerd[1748]: 2025-07-12 00:09:46.678 [WARNING][6323] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" HandleID="k8s-pod-network.00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--sgjtg-eth0" Jul 12 00:09:46.688402 containerd[1748]: 2025-07-12 00:09:46.678 [INFO][6323] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" HandleID="k8s-pod-network.00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--sgjtg-eth0" Jul 12 00:09:46.688402 containerd[1748]: 2025-07-12 00:09:46.680 [INFO][6323] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:46.688402 containerd[1748]: 2025-07-12 00:09:46.684 [INFO][6316] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417" Jul 12 00:09:46.689350 containerd[1748]: time="2025-07-12T00:09:46.688846502Z" level=info msg="TearDown network for sandbox \"00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417\" successfully" Jul 12 00:09:46.710090 containerd[1748]: time="2025-07-12T00:09:46.709798198Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 12 00:09:46.710090 containerd[1748]: time="2025-07-12T00:09:46.709882118Z" level=info msg="RemovePodSandbox \"00bd5a34d1e00147766d121f396f6ab204b17ccc9520d5c38884736c8e921417\" returns successfully" Jul 12 00:09:46.710994 containerd[1748]: time="2025-07-12T00:09:46.710732637Z" level=info msg="StopPodSandbox for \"a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2\"" Jul 12 00:09:46.885891 containerd[1748]: 2025-07-12 00:09:46.811 [WARNING][6337] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--047a586f92-k8s-goldmane--768f4c5c69--4sphm-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"bd421362-01fe-4241-bb04-72d4085cf927", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-047a586f92", ContainerID:"a97b6155af212f8406b775033e5202d64ddec016993b775e2f33b09753509e9d", Pod:"goldmane-768f4c5c69-4sphm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.35.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calie17d4eee484", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:46.885891 containerd[1748]: 2025-07-12 00:09:46.813 [INFO][6337] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" Jul 12 00:09:46.885891 containerd[1748]: 2025-07-12 00:09:46.813 [INFO][6337] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" iface="eth0" netns="" Jul 12 00:09:46.885891 containerd[1748]: 2025-07-12 00:09:46.813 [INFO][6337] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" Jul 12 00:09:46.885891 containerd[1748]: 2025-07-12 00:09:46.813 [INFO][6337] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" Jul 12 00:09:46.885891 containerd[1748]: 2025-07-12 00:09:46.867 [INFO][6347] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" HandleID="k8s-pod-network.a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" Workload="ci--4081.3.4--n--047a586f92-k8s-goldmane--768f4c5c69--4sphm-eth0" Jul 12 00:09:46.885891 containerd[1748]: 2025-07-12 00:09:46.867 [INFO][6347] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:46.885891 containerd[1748]: 2025-07-12 00:09:46.867 [INFO][6347] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:46.885891 containerd[1748]: 2025-07-12 00:09:46.878 [WARNING][6347] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" HandleID="k8s-pod-network.a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" Workload="ci--4081.3.4--n--047a586f92-k8s-goldmane--768f4c5c69--4sphm-eth0" Jul 12 00:09:46.885891 containerd[1748]: 2025-07-12 00:09:46.878 [INFO][6347] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" HandleID="k8s-pod-network.a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" Workload="ci--4081.3.4--n--047a586f92-k8s-goldmane--768f4c5c69--4sphm-eth0" Jul 12 00:09:46.885891 containerd[1748]: 2025-07-12 00:09:46.880 [INFO][6347] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:46.885891 containerd[1748]: 2025-07-12 00:09:46.882 [INFO][6337] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" Jul 12 00:09:46.887547 containerd[1748]: time="2025-07-12T00:09:46.886834355Z" level=info msg="TearDown network for sandbox \"a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2\" successfully" Jul 12 00:09:46.887547 containerd[1748]: time="2025-07-12T00:09:46.887045235Z" level=info msg="StopPodSandbox for \"a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2\" returns successfully" Jul 12 00:09:46.888305 containerd[1748]: time="2025-07-12T00:09:46.888100674Z" level=info msg="RemovePodSandbox for \"a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2\"" Jul 12 00:09:46.888305 containerd[1748]: time="2025-07-12T00:09:46.888138674Z" level=info msg="Forcibly stopping sandbox \"a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2\"" Jul 12 00:09:47.001122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1519352517.mount: Deactivated successfully. 
Jul 12 00:09:47.053997 containerd[1748]: 2025-07-12 00:09:47.004 [WARNING][6365] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--047a586f92-k8s-goldmane--768f4c5c69--4sphm-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"bd421362-01fe-4241-bb04-72d4085cf927", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-047a586f92", ContainerID:"a97b6155af212f8406b775033e5202d64ddec016993b775e2f33b09753509e9d", Pod:"goldmane-768f4c5c69-4sphm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.35.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie17d4eee484", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:47.053997 containerd[1748]: 2025-07-12 00:09:47.004 [INFO][6365] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" Jul 12 00:09:47.053997 containerd[1748]: 
2025-07-12 00:09:47.004 [INFO][6365] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" iface="eth0" netns="" Jul 12 00:09:47.053997 containerd[1748]: 2025-07-12 00:09:47.004 [INFO][6365] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" Jul 12 00:09:47.053997 containerd[1748]: 2025-07-12 00:09:47.004 [INFO][6365] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" Jul 12 00:09:47.053997 containerd[1748]: 2025-07-12 00:09:47.028 [INFO][6373] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" HandleID="k8s-pod-network.a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" Workload="ci--4081.3.4--n--047a586f92-k8s-goldmane--768f4c5c69--4sphm-eth0" Jul 12 00:09:47.053997 containerd[1748]: 2025-07-12 00:09:47.028 [INFO][6373] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:47.053997 containerd[1748]: 2025-07-12 00:09:47.028 [INFO][6373] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:47.053997 containerd[1748]: 2025-07-12 00:09:47.048 [WARNING][6373] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" HandleID="k8s-pod-network.a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" Workload="ci--4081.3.4--n--047a586f92-k8s-goldmane--768f4c5c69--4sphm-eth0" Jul 12 00:09:47.053997 containerd[1748]: 2025-07-12 00:09:47.048 [INFO][6373] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" HandleID="k8s-pod-network.a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" Workload="ci--4081.3.4--n--047a586f92-k8s-goldmane--768f4c5c69--4sphm-eth0" Jul 12 00:09:47.053997 containerd[1748]: 2025-07-12 00:09:47.050 [INFO][6373] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:47.053997 containerd[1748]: 2025-07-12 00:09:47.052 [INFO][6365] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2" Jul 12 00:09:47.053997 containerd[1748]: time="2025-07-12T00:09:47.053803884Z" level=info msg="TearDown network for sandbox \"a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2\" successfully" Jul 12 00:09:47.085433 containerd[1748]: time="2025-07-12T00:09:47.085379648Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 12 00:09:47.085554 containerd[1748]: time="2025-07-12T00:09:47.085472208Z" level=info msg="RemovePodSandbox \"a0c3d02b20fe776b30fde061f16e4cb46e47fd84ecc18012320d8ed1fdf8e6f2\" returns successfully" Jul 12 00:09:47.086274 containerd[1748]: time="2025-07-12T00:09:47.085949447Z" level=info msg="StopPodSandbox for \"cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace\"" Jul 12 00:09:47.216181 containerd[1748]: 2025-07-12 00:09:47.151 [WARNING][6391] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--s85hw-eth0", GenerateName:"calico-apiserver-7fc958d55f-", Namespace:"calico-apiserver", SelfLink:"", UID:"f86b7bbb-ff83-4e2c-ba5d-1ba824643d5e", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fc958d55f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-047a586f92", ContainerID:"0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7", Pod:"calico-apiserver-7fc958d55f-s85hw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1a4c3c4da62", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:47.216181 containerd[1748]: 2025-07-12 00:09:47.151 [INFO][6391] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" Jul 12 00:09:47.216181 containerd[1748]: 2025-07-12 00:09:47.151 [INFO][6391] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" iface="eth0" netns="" Jul 12 00:09:47.216181 containerd[1748]: 2025-07-12 00:09:47.151 [INFO][6391] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" Jul 12 00:09:47.216181 containerd[1748]: 2025-07-12 00:09:47.151 [INFO][6391] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" Jul 12 00:09:47.216181 containerd[1748]: 2025-07-12 00:09:47.192 [INFO][6398] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" HandleID="k8s-pod-network.cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--s85hw-eth0" Jul 12 00:09:47.216181 containerd[1748]: 2025-07-12 00:09:47.193 [INFO][6398] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:47.216181 containerd[1748]: 2025-07-12 00:09:47.193 [INFO][6398] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:47.216181 containerd[1748]: 2025-07-12 00:09:47.206 [WARNING][6398] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" HandleID="k8s-pod-network.cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--s85hw-eth0" Jul 12 00:09:47.216181 containerd[1748]: 2025-07-12 00:09:47.206 [INFO][6398] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" HandleID="k8s-pod-network.cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--s85hw-eth0" Jul 12 00:09:47.216181 containerd[1748]: 2025-07-12 00:09:47.209 [INFO][6398] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:47.216181 containerd[1748]: 2025-07-12 00:09:47.211 [INFO][6391] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" Jul 12 00:09:47.217278 containerd[1748]: time="2025-07-12T00:09:47.217229697Z" level=info msg="TearDown network for sandbox \"cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace\" successfully" Jul 12 00:09:47.218712 containerd[1748]: time="2025-07-12T00:09:47.217283177Z" level=info msg="StopPodSandbox for \"cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace\" returns successfully" Jul 12 00:09:47.219871 containerd[1748]: time="2025-07-12T00:09:47.219514815Z" level=info msg="RemovePodSandbox for \"cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace\"" Jul 12 00:09:47.219871 containerd[1748]: time="2025-07-12T00:09:47.219551575Z" level=info msg="Forcibly stopping sandbox \"cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace\"" Jul 12 00:09:47.423429 containerd[1748]: 2025-07-12 00:09:47.322 [WARNING][6412] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--s85hw-eth0", GenerateName:"calico-apiserver-7fc958d55f-", Namespace:"calico-apiserver", SelfLink:"", UID:"f86b7bbb-ff83-4e2c-ba5d-1ba824643d5e", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fc958d55f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-047a586f92", ContainerID:"0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7", Pod:"calico-apiserver-7fc958d55f-s85hw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1a4c3c4da62", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:47.423429 containerd[1748]: 2025-07-12 00:09:47.323 [INFO][6412] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" Jul 12 00:09:47.423429 containerd[1748]: 2025-07-12 00:09:47.323 [INFO][6412] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" iface="eth0" netns="" Jul 12 00:09:47.423429 containerd[1748]: 2025-07-12 00:09:47.323 [INFO][6412] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" Jul 12 00:09:47.423429 containerd[1748]: 2025-07-12 00:09:47.323 [INFO][6412] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" Jul 12 00:09:47.423429 containerd[1748]: 2025-07-12 00:09:47.370 [INFO][6419] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" HandleID="k8s-pod-network.cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--s85hw-eth0" Jul 12 00:09:47.423429 containerd[1748]: 2025-07-12 00:09:47.370 [INFO][6419] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:47.423429 containerd[1748]: 2025-07-12 00:09:47.370 [INFO][6419] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:47.423429 containerd[1748]: 2025-07-12 00:09:47.409 [WARNING][6419] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" HandleID="k8s-pod-network.cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--s85hw-eth0" Jul 12 00:09:47.423429 containerd[1748]: 2025-07-12 00:09:47.409 [INFO][6419] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" HandleID="k8s-pod-network.cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--s85hw-eth0" Jul 12 00:09:47.423429 containerd[1748]: 2025-07-12 00:09:47.413 [INFO][6419] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:47.423429 containerd[1748]: 2025-07-12 00:09:47.419 [INFO][6412] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace" Jul 12 00:09:47.426299 containerd[1748]: time="2025-07-12T00:09:47.424021461Z" level=info msg="TearDown network for sandbox \"cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace\" successfully" Jul 12 00:09:47.443543 containerd[1748]: time="2025-07-12T00:09:47.443495078Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 12 00:09:47.444101 containerd[1748]: time="2025-07-12T00:09:47.443897318Z" level=info msg="RemovePodSandbox \"cc4a27ab0ccdba836b525e1f9d1c2ca82e3e40b0a8bcd57489f1a9e8b310bace\" returns successfully" Jul 12 00:09:48.018608 containerd[1748]: time="2025-07-12T00:09:48.018536497Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:48.022412 containerd[1748]: time="2025-07-12T00:09:48.022348173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 12 00:09:48.032396 containerd[1748]: time="2025-07-12T00:09:48.032335761Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:48.038934 containerd[1748]: time="2025-07-12T00:09:48.038826714Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:48.040104 containerd[1748]: time="2025-07-12T00:09:48.039951432Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 5.527027073s" Jul 12 00:09:48.040104 containerd[1748]: time="2025-07-12T00:09:48.040027632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 12 00:09:48.043727 containerd[1748]: time="2025-07-12T00:09:48.043583828Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 12 00:09:48.051683 containerd[1748]: time="2025-07-12T00:09:48.051502419Z" level=info msg="CreateContainer within sandbox \"a97b6155af212f8406b775033e5202d64ddec016993b775e2f33b09753509e9d\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 12 00:09:48.112805 containerd[1748]: time="2025-07-12T00:09:48.112670308Z" level=info msg="CreateContainer within sandbox \"a97b6155af212f8406b775033e5202d64ddec016993b775e2f33b09753509e9d\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"48264455f2a19e0ac140bb0435e885ba62682d640b90640771c3b6ac579b29d6\"" Jul 12 00:09:48.114287 containerd[1748]: time="2025-07-12T00:09:48.113873107Z" level=info msg="StartContainer for \"48264455f2a19e0ac140bb0435e885ba62682d640b90640771c3b6ac579b29d6\"" Jul 12 00:09:48.159838 systemd[1]: run-containerd-runc-k8s.io-48264455f2a19e0ac140bb0435e885ba62682d640b90640771c3b6ac579b29d6-runc.q59kkW.mount: Deactivated successfully. Jul 12 00:09:48.170445 systemd[1]: Started cri-containerd-48264455f2a19e0ac140bb0435e885ba62682d640b90640771c3b6ac579b29d6.scope - libcontainer container 48264455f2a19e0ac140bb0435e885ba62682d640b90640771c3b6ac579b29d6. 
Jul 12 00:09:48.240968 containerd[1748]: time="2025-07-12T00:09:48.240847800Z" level=info msg="StartContainer for \"48264455f2a19e0ac140bb0435e885ba62682d640b90640771c3b6ac579b29d6\" returns successfully" Jul 12 00:09:48.345944 kubelet[3127]: I0712 00:09:48.345802 3127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-4sphm" podStartSLOduration=27.213848362 podStartE2EDuration="38.345787078s" podCreationTimestamp="2025-07-12 00:09:10 +0000 UTC" firstStartedPulling="2025-07-12 00:09:36.910002114 +0000 UTC m=+52.080712250" lastFinishedPulling="2025-07-12 00:09:48.04194083 +0000 UTC m=+63.212650966" observedRunningTime="2025-07-12 00:09:48.345385519 +0000 UTC m=+63.516095655" watchObservedRunningTime="2025-07-12 00:09:48.345787078 +0000 UTC m=+63.516497214" Jul 12 00:09:48.426277 containerd[1748]: time="2025-07-12T00:09:48.425978585Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:48.433232 containerd[1748]: time="2025-07-12T00:09:48.433182697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 12 00:09:48.436435 containerd[1748]: time="2025-07-12T00:09:48.436399733Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 392.766185ms" Jul 12 00:09:48.436625 containerd[1748]: time="2025-07-12T00:09:48.436561213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 12 00:09:48.440641 containerd[1748]: 
time="2025-07-12T00:09:48.440446209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 12 00:09:48.447789 containerd[1748]: time="2025-07-12T00:09:48.447749840Z" level=info msg="CreateContainer within sandbox \"0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 00:09:48.495738 containerd[1748]: time="2025-07-12T00:09:48.495551185Z" level=info msg="CreateContainer within sandbox \"0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0de2a2a76741027de802c62377d3ba91c519ee902851d0ae9ce2aa5a2bd7b041\"" Jul 12 00:09:48.498295 containerd[1748]: time="2025-07-12T00:09:48.496305024Z" level=info msg="StartContainer for \"0de2a2a76741027de802c62377d3ba91c519ee902851d0ae9ce2aa5a2bd7b041\"" Jul 12 00:09:48.536477 systemd[1]: Started cri-containerd-0de2a2a76741027de802c62377d3ba91c519ee902851d0ae9ce2aa5a2bd7b041.scope - libcontainer container 0de2a2a76741027de802c62377d3ba91c519ee902851d0ae9ce2aa5a2bd7b041. 
Jul 12 00:09:48.616040 containerd[1748]: time="2025-07-12T00:09:48.615724006Z" level=info msg="StartContainer for \"0de2a2a76741027de802c62377d3ba91c519ee902851d0ae9ce2aa5a2bd7b041\" returns successfully" Jul 12 00:09:49.876202 containerd[1748]: time="2025-07-12T00:09:49.876139547Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:49.880957 containerd[1748]: time="2025-07-12T00:09:49.880925781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 12 00:09:49.886828 containerd[1748]: time="2025-07-12T00:09:49.886790575Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:49.892617 containerd[1748]: time="2025-07-12T00:09:49.892580048Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:49.893755 containerd[1748]: time="2025-07-12T00:09:49.893250207Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.452765678s" Jul 12 00:09:49.893810 containerd[1748]: time="2025-07-12T00:09:49.893759367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 12 00:09:49.905523 containerd[1748]: 
time="2025-07-12T00:09:49.905461833Z" level=info msg="CreateContainer within sandbox \"b59abbe8b833b0853e277023384fd63a08e03a185c0fd609e3293f9c501b3eb0\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 12 00:09:49.951953 containerd[1748]: time="2025-07-12T00:09:49.951892339Z" level=info msg="CreateContainer within sandbox \"b59abbe8b833b0853e277023384fd63a08e03a185c0fd609e3293f9c501b3eb0\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"62aec8fb98ff004954ad6f84c5c5d7c1234f8d7af8e712d6676dc968ab18036c\"" Jul 12 00:09:49.953607 containerd[1748]: time="2025-07-12T00:09:49.953574337Z" level=info msg="StartContainer for \"62aec8fb98ff004954ad6f84c5c5d7c1234f8d7af8e712d6676dc968ab18036c\"" Jul 12 00:09:49.996444 systemd[1]: Started cri-containerd-62aec8fb98ff004954ad6f84c5c5d7c1234f8d7af8e712d6676dc968ab18036c.scope - libcontainer container 62aec8fb98ff004954ad6f84c5c5d7c1234f8d7af8e712d6676dc968ab18036c. Jul 12 00:09:50.045514 containerd[1748]: time="2025-07-12T00:09:50.045463791Z" level=info msg="StartContainer for \"62aec8fb98ff004954ad6f84c5c5d7c1234f8d7af8e712d6676dc968ab18036c\" returns successfully" Jul 12 00:09:50.320636 kubelet[3127]: I0712 00:09:50.320596 3127 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:09:50.344107 kubelet[3127]: I0712 00:09:50.342543 3127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7fc958d55f-s85hw" podStartSLOduration=34.013272323 podStartE2EDuration="45.342514407s" podCreationTimestamp="2025-07-12 00:09:05 +0000 UTC" firstStartedPulling="2025-07-12 00:09:37.110364846 +0000 UTC m=+52.281074942" lastFinishedPulling="2025-07-12 00:09:48.43960689 +0000 UTC m=+63.610317026" observedRunningTime="2025-07-12 00:09:49.343921283 +0000 UTC m=+64.514631459" watchObservedRunningTime="2025-07-12 00:09:50.342514407 +0000 UTC m=+65.513224503" Jul 12 00:09:51.041013 kubelet[3127]: 
I0712 00:09:51.040961 3127 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 12 00:09:51.041013 kubelet[3127]: I0712 00:09:51.041008 3127 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 12 00:10:08.249902 systemd[1]: run-containerd-runc-k8s.io-711629c4fc57817b644db027a4c4ba677150a8eb7f999193ae47ac853429ab73-runc.HV6bGg.mount: Deactivated successfully. Jul 12 00:10:10.801854 kubelet[3127]: I0712 00:10:10.801787 3127 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:10:10.841136 kubelet[3127]: I0712 00:10:10.839908 3127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-26mjl" podStartSLOduration=46.252203387 podStartE2EDuration="1m0.839892132s" podCreationTimestamp="2025-07-12 00:09:10 +0000 UTC" firstStartedPulling="2025-07-12 00:09:35.307941459 +0000 UTC m=+50.478651595" lastFinishedPulling="2025-07-12 00:09:49.895630204 +0000 UTC m=+65.066340340" observedRunningTime="2025-07-12 00:09:50.347534721 +0000 UTC m=+65.518244897" watchObservedRunningTime="2025-07-12 00:10:10.839892132 +0000 UTC m=+86.010602228" Jul 12 00:10:13.883707 kubelet[3127]: I0712 00:10:13.883540 3127 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:10:14.013385 containerd[1748]: time="2025-07-12T00:10:14.013223831Z" level=info msg="StopContainer for \"0de2a2a76741027de802c62377d3ba91c519ee902851d0ae9ce2aa5a2bd7b041\" with timeout 30 (s)" Jul 12 00:10:14.016864 containerd[1748]: time="2025-07-12T00:10:14.016156228Z" level=info msg="Stop container \"0de2a2a76741027de802c62377d3ba91c519ee902851d0ae9ce2aa5a2bd7b041\" with signal terminated" Jul 12 00:10:14.089449 systemd[1]: 
cri-containerd-0de2a2a76741027de802c62377d3ba91c519ee902851d0ae9ce2aa5a2bd7b041.scope: Deactivated successfully. Jul 12 00:10:14.115036 systemd[1]: Created slice kubepods-besteffort-pod5527147c_8588_4cbd_9393_db30eb8e9873.slice - libcontainer container kubepods-besteffort-pod5527147c_8588_4cbd_9393_db30eb8e9873.slice. Jul 12 00:10:14.148420 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0de2a2a76741027de802c62377d3ba91c519ee902851d0ae9ce2aa5a2bd7b041-rootfs.mount: Deactivated successfully. Jul 12 00:10:14.196600 containerd[1748]: time="2025-07-12T00:10:14.196508136Z" level=info msg="shim disconnected" id=0de2a2a76741027de802c62377d3ba91c519ee902851d0ae9ce2aa5a2bd7b041 namespace=k8s.io Jul 12 00:10:14.196600 containerd[1748]: time="2025-07-12T00:10:14.196590856Z" level=warning msg="cleaning up after shim disconnected" id=0de2a2a76741027de802c62377d3ba91c519ee902851d0ae9ce2aa5a2bd7b041 namespace=k8s.io Jul 12 00:10:14.196600 containerd[1748]: time="2025-07-12T00:10:14.196602136Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:10:14.218892 containerd[1748]: time="2025-07-12T00:10:14.218839350Z" level=info msg="StopContainer for \"0de2a2a76741027de802c62377d3ba91c519ee902851d0ae9ce2aa5a2bd7b041\" returns successfully" Jul 12 00:10:14.222301 kubelet[3127]: I0712 00:10:14.219432 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9pgn\" (UniqueName: \"kubernetes.io/projected/5527147c-8588-4cbd-9393-db30eb8e9873-kube-api-access-z9pgn\") pod \"calico-apiserver-79f66ccc75-ppdlx\" (UID: \"5527147c-8588-4cbd-9393-db30eb8e9873\") " pod="calico-apiserver/calico-apiserver-79f66ccc75-ppdlx" Jul 12 00:10:14.222301 kubelet[3127]: I0712 00:10:14.219493 3127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5527147c-8588-4cbd-9393-db30eb8e9873-calico-apiserver-certs\") pod 
\"calico-apiserver-79f66ccc75-ppdlx\" (UID: \"5527147c-8588-4cbd-9393-db30eb8e9873\") " pod="calico-apiserver/calico-apiserver-79f66ccc75-ppdlx" Jul 12 00:10:14.222486 containerd[1748]: time="2025-07-12T00:10:14.219693909Z" level=info msg="StopPodSandbox for \"0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7\"" Jul 12 00:10:14.222486 containerd[1748]: time="2025-07-12T00:10:14.219742549Z" level=info msg="Container to stop \"0de2a2a76741027de802c62377d3ba91c519ee902851d0ae9ce2aa5a2bd7b041\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:10:14.223898 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7-shm.mount: Deactivated successfully. Jul 12 00:10:14.232684 systemd[1]: cri-containerd-0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7.scope: Deactivated successfully. Jul 12 00:10:14.271658 containerd[1748]: time="2025-07-12T00:10:14.271567048Z" level=info msg="shim disconnected" id=0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7 namespace=k8s.io Jul 12 00:10:14.271658 containerd[1748]: time="2025-07-12T00:10:14.271645608Z" level=warning msg="cleaning up after shim disconnected" id=0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7 namespace=k8s.io Jul 12 00:10:14.271658 containerd[1748]: time="2025-07-12T00:10:14.271655128Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:10:14.273869 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7-rootfs.mount: Deactivated successfully. 
Jul 12 00:10:14.383059 kubelet[3127]: I0712 00:10:14.383016 3127 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" Jul 12 00:10:14.414521 systemd-networkd[1562]: cali1a4c3c4da62: Link DOWN Jul 12 00:10:14.414529 systemd-networkd[1562]: cali1a4c3c4da62: Lost carrier Jul 12 00:10:14.422232 containerd[1748]: time="2025-07-12T00:10:14.421698392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79f66ccc75-ppdlx,Uid:5527147c-8588-4cbd-9393-db30eb8e9873,Namespace:calico-apiserver,Attempt:0,}" Jul 12 00:10:14.560910 containerd[1748]: 2025-07-12 00:10:14.412 [INFO][6785] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" Jul 12 00:10:14.560910 containerd[1748]: 2025-07-12 00:10:14.413 [INFO][6785] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" iface="eth0" netns="/var/run/netns/cni-a8c3540b-481f-1995-e744-3f1f5ba1d03e" Jul 12 00:10:14.560910 containerd[1748]: 2025-07-12 00:10:14.413 [INFO][6785] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" iface="eth0" netns="/var/run/netns/cni-a8c3540b-481f-1995-e744-3f1f5ba1d03e" Jul 12 00:10:14.560910 containerd[1748]: 2025-07-12 00:10:14.424 [INFO][6785] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" after=11.261987ms iface="eth0" netns="/var/run/netns/cni-a8c3540b-481f-1995-e744-3f1f5ba1d03e" Jul 12 00:10:14.560910 containerd[1748]: 2025-07-12 00:10:14.424 [INFO][6785] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" Jul 12 00:10:14.560910 containerd[1748]: 2025-07-12 00:10:14.424 [INFO][6785] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" Jul 12 00:10:14.560910 containerd[1748]: 2025-07-12 00:10:14.456 [INFO][6796] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" HandleID="k8s-pod-network.0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--s85hw-eth0" Jul 12 00:10:14.560910 containerd[1748]: 2025-07-12 00:10:14.457 [INFO][6796] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:10:14.560910 containerd[1748]: 2025-07-12 00:10:14.457 [INFO][6796] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:10:14.560910 containerd[1748]: 2025-07-12 00:10:14.548 [INFO][6796] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" HandleID="k8s-pod-network.0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--s85hw-eth0" Jul 12 00:10:14.560910 containerd[1748]: 2025-07-12 00:10:14.548 [INFO][6796] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" HandleID="k8s-pod-network.0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--s85hw-eth0" Jul 12 00:10:14.560910 containerd[1748]: 2025-07-12 00:10:14.550 [INFO][6796] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:10:14.560910 containerd[1748]: 2025-07-12 00:10:14.557 [INFO][6785] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" Jul 12 00:10:14.564825 containerd[1748]: time="2025-07-12T00:10:14.564586504Z" level=info msg="TearDown network for sandbox \"0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7\" successfully" Jul 12 00:10:14.564825 containerd[1748]: time="2025-07-12T00:10:14.564629704Z" level=info msg="StopPodSandbox for \"0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7\" returns successfully" Jul 12 00:10:14.696390 systemd-networkd[1562]: cali08d302669aa: Link UP Jul 12 00:10:14.696607 systemd-networkd[1562]: cali08d302669aa: Gained carrier Jul 12 00:10:14.721460 containerd[1748]: 2025-07-12 00:10:14.527 [INFO][6804] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--ppdlx-eth0 calico-apiserver-79f66ccc75- calico-apiserver 5527147c-8588-4cbd-9393-db30eb8e9873 1218 0 2025-07-12 00:10:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79f66ccc75 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.4-n-047a586f92 calico-apiserver-79f66ccc75-ppdlx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali08d302669aa [] [] }} ContainerID="56e8a73cc69b2d676f81570eeb2451b96e980bfa7651c54e59d1600ee8bb7917" Namespace="calico-apiserver" Pod="calico-apiserver-79f66ccc75-ppdlx" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--ppdlx-" Jul 12 00:10:14.721460 containerd[1748]: 2025-07-12 00:10:14.528 [INFO][6804] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="56e8a73cc69b2d676f81570eeb2451b96e980bfa7651c54e59d1600ee8bb7917" Namespace="calico-apiserver" Pod="calico-apiserver-79f66ccc75-ppdlx" 
WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--ppdlx-eth0" Jul 12 00:10:14.721460 containerd[1748]: 2025-07-12 00:10:14.577 [INFO][6819] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="56e8a73cc69b2d676f81570eeb2451b96e980bfa7651c54e59d1600ee8bb7917" HandleID="k8s-pod-network.56e8a73cc69b2d676f81570eeb2451b96e980bfa7651c54e59d1600ee8bb7917" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--ppdlx-eth0" Jul 12 00:10:14.721460 containerd[1748]: 2025-07-12 00:10:14.578 [INFO][6819] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="56e8a73cc69b2d676f81570eeb2451b96e980bfa7651c54e59d1600ee8bb7917" HandleID="k8s-pod-network.56e8a73cc69b2d676f81570eeb2451b96e980bfa7651c54e59d1600ee8bb7917" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--ppdlx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3730), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.4-n-047a586f92", "pod":"calico-apiserver-79f66ccc75-ppdlx", "timestamp":"2025-07-12 00:10:14.577524529 +0000 UTC"}, Hostname:"ci-4081.3.4-n-047a586f92", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:10:14.721460 containerd[1748]: 2025-07-12 00:10:14.578 [INFO][6819] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:10:14.721460 containerd[1748]: 2025-07-12 00:10:14.578 [INFO][6819] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:10:14.721460 containerd[1748]: 2025-07-12 00:10:14.578 [INFO][6819] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-n-047a586f92' Jul 12 00:10:14.721460 containerd[1748]: 2025-07-12 00:10:14.605 [INFO][6819] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.56e8a73cc69b2d676f81570eeb2451b96e980bfa7651c54e59d1600ee8bb7917" host="ci-4081.3.4-n-047a586f92" Jul 12 00:10:14.721460 containerd[1748]: 2025-07-12 00:10:14.647 [INFO][6819] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-n-047a586f92" Jul 12 00:10:14.721460 containerd[1748]: 2025-07-12 00:10:14.653 [INFO][6819] ipam/ipam.go 511: Trying affinity for 192.168.35.192/26 host="ci-4081.3.4-n-047a586f92" Jul 12 00:10:14.721460 containerd[1748]: 2025-07-12 00:10:14.656 [INFO][6819] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.192/26 host="ci-4081.3.4-n-047a586f92" Jul 12 00:10:14.721460 containerd[1748]: 2025-07-12 00:10:14.660 [INFO][6819] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.192/26 host="ci-4081.3.4-n-047a586f92" Jul 12 00:10:14.721460 containerd[1748]: 2025-07-12 00:10:14.660 [INFO][6819] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.35.192/26 handle="k8s-pod-network.56e8a73cc69b2d676f81570eeb2451b96e980bfa7651c54e59d1600ee8bb7917" host="ci-4081.3.4-n-047a586f92" Jul 12 00:10:14.721460 containerd[1748]: 2025-07-12 00:10:14.663 [INFO][6819] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.56e8a73cc69b2d676f81570eeb2451b96e980bfa7651c54e59d1600ee8bb7917 Jul 12 00:10:14.721460 containerd[1748]: 2025-07-12 00:10:14.673 [INFO][6819] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.35.192/26 handle="k8s-pod-network.56e8a73cc69b2d676f81570eeb2451b96e980bfa7651c54e59d1600ee8bb7917" host="ci-4081.3.4-n-047a586f92" Jul 12 00:10:14.721460 containerd[1748]: 2025-07-12 00:10:14.689 [INFO][6819] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.35.202/26] block=192.168.35.192/26 handle="k8s-pod-network.56e8a73cc69b2d676f81570eeb2451b96e980bfa7651c54e59d1600ee8bb7917" host="ci-4081.3.4-n-047a586f92" Jul 12 00:10:14.721460 containerd[1748]: 2025-07-12 00:10:14.689 [INFO][6819] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.202/26] handle="k8s-pod-network.56e8a73cc69b2d676f81570eeb2451b96e980bfa7651c54e59d1600ee8bb7917" host="ci-4081.3.4-n-047a586f92" Jul 12 00:10:14.721460 containerd[1748]: 2025-07-12 00:10:14.689 [INFO][6819] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:10:14.721460 containerd[1748]: 2025-07-12 00:10:14.689 [INFO][6819] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.202/26] IPv6=[] ContainerID="56e8a73cc69b2d676f81570eeb2451b96e980bfa7651c54e59d1600ee8bb7917" HandleID="k8s-pod-network.56e8a73cc69b2d676f81570eeb2451b96e980bfa7651c54e59d1600ee8bb7917" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--ppdlx-eth0" Jul 12 00:10:14.722591 containerd[1748]: 2025-07-12 00:10:14.692 [INFO][6804] cni-plugin/k8s.go 418: Populated endpoint ContainerID="56e8a73cc69b2d676f81570eeb2451b96e980bfa7651c54e59d1600ee8bb7917" Namespace="calico-apiserver" Pod="calico-apiserver-79f66ccc75-ppdlx" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--ppdlx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--ppdlx-eth0", GenerateName:"calico-apiserver-79f66ccc75-", Namespace:"calico-apiserver", SelfLink:"", UID:"5527147c-8588-4cbd-9393-db30eb8e9873", ResourceVersion:"1218", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 10, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"79f66ccc75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-047a586f92", ContainerID:"", Pod:"calico-apiserver-79f66ccc75-ppdlx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.202/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali08d302669aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:10:14.722591 containerd[1748]: 2025-07-12 00:10:14.692 [INFO][6804] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.202/32] ContainerID="56e8a73cc69b2d676f81570eeb2451b96e980bfa7651c54e59d1600ee8bb7917" Namespace="calico-apiserver" Pod="calico-apiserver-79f66ccc75-ppdlx" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--ppdlx-eth0" Jul 12 00:10:14.722591 containerd[1748]: 2025-07-12 00:10:14.692 [INFO][6804] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali08d302669aa ContainerID="56e8a73cc69b2d676f81570eeb2451b96e980bfa7651c54e59d1600ee8bb7917" Namespace="calico-apiserver" Pod="calico-apiserver-79f66ccc75-ppdlx" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--ppdlx-eth0" Jul 12 00:10:14.722591 containerd[1748]: 2025-07-12 00:10:14.695 [INFO][6804] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="56e8a73cc69b2d676f81570eeb2451b96e980bfa7651c54e59d1600ee8bb7917" Namespace="calico-apiserver" Pod="calico-apiserver-79f66ccc75-ppdlx" 
WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--ppdlx-eth0" Jul 12 00:10:14.722591 containerd[1748]: 2025-07-12 00:10:14.695 [INFO][6804] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="56e8a73cc69b2d676f81570eeb2451b96e980bfa7651c54e59d1600ee8bb7917" Namespace="calico-apiserver" Pod="calico-apiserver-79f66ccc75-ppdlx" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--ppdlx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--ppdlx-eth0", GenerateName:"calico-apiserver-79f66ccc75-", Namespace:"calico-apiserver", SelfLink:"", UID:"5527147c-8588-4cbd-9393-db30eb8e9873", ResourceVersion:"1218", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 10, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79f66ccc75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-047a586f92", ContainerID:"56e8a73cc69b2d676f81570eeb2451b96e980bfa7651c54e59d1600ee8bb7917", Pod:"calico-apiserver-79f66ccc75-ppdlx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.202/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali08d302669aa", MAC:"62:05:09:86:79:25", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:10:14.722591 containerd[1748]: 2025-07-12 00:10:14.715 [INFO][6804] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="56e8a73cc69b2d676f81570eeb2451b96e980bfa7651c54e59d1600ee8bb7917" Namespace="calico-apiserver" Pod="calico-apiserver-79f66ccc75-ppdlx" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--79f66ccc75--ppdlx-eth0" Jul 12 00:10:14.725196 kubelet[3127]: I0712 00:10:14.723566 3127 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f86b7bbb-ff83-4e2c-ba5d-1ba824643d5e-calico-apiserver-certs\") pod \"f86b7bbb-ff83-4e2c-ba5d-1ba824643d5e\" (UID: \"f86b7bbb-ff83-4e2c-ba5d-1ba824643d5e\") " Jul 12 00:10:14.725196 kubelet[3127]: I0712 00:10:14.723614 3127 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64stz\" (UniqueName: \"kubernetes.io/projected/f86b7bbb-ff83-4e2c-ba5d-1ba824643d5e-kube-api-access-64stz\") pod \"f86b7bbb-ff83-4e2c-ba5d-1ba824643d5e\" (UID: \"f86b7bbb-ff83-4e2c-ba5d-1ba824643d5e\") " Jul 12 00:10:14.728597 kubelet[3127]: I0712 00:10:14.728455 3127 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f86b7bbb-ff83-4e2c-ba5d-1ba824643d5e-kube-api-access-64stz" (OuterVolumeSpecName: "kube-api-access-64stz") pod "f86b7bbb-ff83-4e2c-ba5d-1ba824643d5e" (UID: "f86b7bbb-ff83-4e2c-ba5d-1ba824643d5e"). InnerVolumeSpecName "kube-api-access-64stz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:10:14.734836 kubelet[3127]: I0712 00:10:14.734617 3127 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f86b7bbb-ff83-4e2c-ba5d-1ba824643d5e-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "f86b7bbb-ff83-4e2c-ba5d-1ba824643d5e" (UID: "f86b7bbb-ff83-4e2c-ba5d-1ba824643d5e"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 12 00:10:14.756412 containerd[1748]: time="2025-07-12T00:10:14.753705962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:10:14.756412 containerd[1748]: time="2025-07-12T00:10:14.756181919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:10:14.756412 containerd[1748]: time="2025-07-12T00:10:14.756198799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:10:14.756412 containerd[1748]: time="2025-07-12T00:10:14.756323399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:10:14.778629 systemd[1]: Started cri-containerd-56e8a73cc69b2d676f81570eeb2451b96e980bfa7651c54e59d1600ee8bb7917.scope - libcontainer container 56e8a73cc69b2d676f81570eeb2451b96e980bfa7651c54e59d1600ee8bb7917. 
Jul 12 00:10:14.824609 kubelet[3127]: I0712 00:10:14.824540 3127 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f86b7bbb-ff83-4e2c-ba5d-1ba824643d5e-calico-apiserver-certs\") on node \"ci-4081.3.4-n-047a586f92\" DevicePath \"\"" Jul 12 00:10:14.824609 kubelet[3127]: I0712 00:10:14.824578 3127 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-64stz\" (UniqueName: \"kubernetes.io/projected/f86b7bbb-ff83-4e2c-ba5d-1ba824643d5e-kube-api-access-64stz\") on node \"ci-4081.3.4-n-047a586f92\" DevicePath \"\"" Jul 12 00:10:14.949437 systemd[1]: Removed slice kubepods-besteffort-podf86b7bbb_ff83_4e2c_ba5d_1ba824643d5e.slice - libcontainer container kubepods-besteffort-podf86b7bbb_ff83_4e2c_ba5d_1ba824643d5e.slice. Jul 12 00:10:14.979900 containerd[1748]: time="2025-07-12T00:10:14.979615737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79f66ccc75-ppdlx,Uid:5527147c-8588-4cbd-9393-db30eb8e9873,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"56e8a73cc69b2d676f81570eeb2451b96e980bfa7651c54e59d1600ee8bb7917\"" Jul 12 00:10:14.989856 containerd[1748]: time="2025-07-12T00:10:14.989592205Z" level=info msg="CreateContainer within sandbox \"56e8a73cc69b2d676f81570eeb2451b96e980bfa7651c54e59d1600ee8bb7917\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 00:10:15.041181 containerd[1748]: time="2025-07-12T00:10:15.041131625Z" level=info msg="CreateContainer within sandbox \"56e8a73cc69b2d676f81570eeb2451b96e980bfa7651c54e59d1600ee8bb7917\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f3c9088f6a8963464783d6e90d30dc264d5d3e8c8ff3d18f8816839b521a17af\"" Jul 12 00:10:15.042897 containerd[1748]: time="2025-07-12T00:10:15.042591383Z" level=info msg="StartContainer for \"f3c9088f6a8963464783d6e90d30dc264d5d3e8c8ff3d18f8816839b521a17af\"" Jul 12 00:10:15.076492 systemd[1]: Started 
cri-containerd-f3c9088f6a8963464783d6e90d30dc264d5d3e8c8ff3d18f8816839b521a17af.scope - libcontainer container f3c9088f6a8963464783d6e90d30dc264d5d3e8c8ff3d18f8816839b521a17af. Jul 12 00:10:15.117627 containerd[1748]: time="2025-07-12T00:10:15.117449255Z" level=info msg="StartContainer for \"f3c9088f6a8963464783d6e90d30dc264d5d3e8c8ff3d18f8816839b521a17af\" returns successfully" Jul 12 00:10:15.150210 systemd[1]: run-netns-cni\x2da8c3540b\x2d481f\x2d1995\x2de744\x2d3f1f5ba1d03e.mount: Deactivated successfully. Jul 12 00:10:15.150404 systemd[1]: var-lib-kubelet-pods-f86b7bbb\x2dff83\x2d4e2c\x2dba5d\x2d1ba824643d5e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d64stz.mount: Deactivated successfully. Jul 12 00:10:15.150467 systemd[1]: var-lib-kubelet-pods-f86b7bbb\x2dff83\x2d4e2c\x2dba5d\x2d1ba824643d5e-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Jul 12 00:10:15.414296 kubelet[3127]: I0712 00:10:15.412997 3127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-79f66ccc75-ppdlx" podStartSLOduration=1.412976468 podStartE2EDuration="1.412976468s" podCreationTimestamp="2025-07-12 00:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:10:15.411986149 +0000 UTC m=+90.582696285" watchObservedRunningTime="2025-07-12 00:10:15.412976468 +0000 UTC m=+90.583686564" Jul 12 00:10:15.818408 systemd-networkd[1562]: cali08d302669aa: Gained IPv6LL Jul 12 00:10:16.933922 kubelet[3127]: I0712 00:10:16.933874 3127 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f86b7bbb-ff83-4e2c-ba5d-1ba824643d5e" path="/var/lib/kubelet/pods/f86b7bbb-ff83-4e2c-ba5d-1ba824643d5e/volumes" Jul 12 00:10:17.320690 containerd[1748]: time="2025-07-12T00:10:17.320539909Z" level=info msg="StopContainer for 
\"b3ee9725dcc468c6f1f9e463ff3afbb0f202d43620d8012b4dc9fcbb9cddc252\" with timeout 30 (s)" Jul 12 00:10:17.321050 containerd[1748]: time="2025-07-12T00:10:17.320974828Z" level=info msg="Stop container \"b3ee9725dcc468c6f1f9e463ff3afbb0f202d43620d8012b4dc9fcbb9cddc252\" with signal terminated" Jul 12 00:10:17.363163 systemd[1]: cri-containerd-b3ee9725dcc468c6f1f9e463ff3afbb0f202d43620d8012b4dc9fcbb9cddc252.scope: Deactivated successfully. Jul 12 00:10:17.363490 systemd[1]: cri-containerd-b3ee9725dcc468c6f1f9e463ff3afbb0f202d43620d8012b4dc9fcbb9cddc252.scope: Consumed 1.213s CPU time. Jul 12 00:10:17.401358 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3ee9725dcc468c6f1f9e463ff3afbb0f202d43620d8012b4dc9fcbb9cddc252-rootfs.mount: Deactivated successfully. Jul 12 00:10:17.404926 containerd[1748]: time="2025-07-12T00:10:17.404860290Z" level=info msg="shim disconnected" id=b3ee9725dcc468c6f1f9e463ff3afbb0f202d43620d8012b4dc9fcbb9cddc252 namespace=k8s.io Jul 12 00:10:17.404926 containerd[1748]: time="2025-07-12T00:10:17.404919330Z" level=warning msg="cleaning up after shim disconnected" id=b3ee9725dcc468c6f1f9e463ff3afbb0f202d43620d8012b4dc9fcbb9cddc252 namespace=k8s.io Jul 12 00:10:17.404926 containerd[1748]: time="2025-07-12T00:10:17.404929010Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:10:17.558491 containerd[1748]: time="2025-07-12T00:10:17.558359710Z" level=info msg="StopContainer for \"b3ee9725dcc468c6f1f9e463ff3afbb0f202d43620d8012b4dc9fcbb9cddc252\" returns successfully" Jul 12 00:10:17.559378 containerd[1748]: time="2025-07-12T00:10:17.559091509Z" level=info msg="StopPodSandbox for \"5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569\"" Jul 12 00:10:17.559378 containerd[1748]: time="2025-07-12T00:10:17.559132389Z" level=info msg="Container to stop \"b3ee9725dcc468c6f1f9e463ff3afbb0f202d43620d8012b4dc9fcbb9cddc252\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 
00:10:17.564538 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569-shm.mount: Deactivated successfully. Jul 12 00:10:17.638160 systemd[1]: cri-containerd-5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569.scope: Deactivated successfully. Jul 12 00:10:17.673054 containerd[1748]: time="2025-07-12T00:10:17.672989055Z" level=info msg="shim disconnected" id=5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569 namespace=k8s.io Jul 12 00:10:17.673054 containerd[1748]: time="2025-07-12T00:10:17.673046615Z" level=warning msg="cleaning up after shim disconnected" id=5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569 namespace=k8s.io Jul 12 00:10:17.673054 containerd[1748]: time="2025-07-12T00:10:17.673055055Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:10:17.676604 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569-rootfs.mount: Deactivated successfully. Jul 12 00:10:17.784721 systemd-networkd[1562]: calidefe61e03cd: Link DOWN Jul 12 00:10:17.784730 systemd-networkd[1562]: calidefe61e03cd: Lost carrier Jul 12 00:10:17.893607 containerd[1748]: 2025-07-12 00:10:17.782 [INFO][7001] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" Jul 12 00:10:17.893607 containerd[1748]: 2025-07-12 00:10:17.782 [INFO][7001] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" iface="eth0" netns="/var/run/netns/cni-8e9e9a42-ae94-4a2f-74ef-8481de265ead" Jul 12 00:10:17.893607 containerd[1748]: 2025-07-12 00:10:17.782 [INFO][7001] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" iface="eth0" netns="/var/run/netns/cni-8e9e9a42-ae94-4a2f-74ef-8481de265ead" Jul 12 00:10:17.893607 containerd[1748]: 2025-07-12 00:10:17.795 [INFO][7001] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" after=13.049584ms iface="eth0" netns="/var/run/netns/cni-8e9e9a42-ae94-4a2f-74ef-8481de265ead" Jul 12 00:10:17.893607 containerd[1748]: 2025-07-12 00:10:17.795 [INFO][7001] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" Jul 12 00:10:17.893607 containerd[1748]: 2025-07-12 00:10:17.795 [INFO][7001] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" Jul 12 00:10:17.893607 containerd[1748]: 2025-07-12 00:10:17.817 [INFO][7011] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" HandleID="k8s-pod-network.5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--sgjtg-eth0" Jul 12 00:10:17.893607 containerd[1748]: 2025-07-12 00:10:17.819 [INFO][7011] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:10:17.893607 containerd[1748]: 2025-07-12 00:10:17.819 [INFO][7011] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:10:17.893607 containerd[1748]: 2025-07-12 00:10:17.886 [INFO][7011] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" HandleID="k8s-pod-network.5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--sgjtg-eth0" Jul 12 00:10:17.893607 containerd[1748]: 2025-07-12 00:10:17.886 [INFO][7011] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" HandleID="k8s-pod-network.5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--sgjtg-eth0" Jul 12 00:10:17.893607 containerd[1748]: 2025-07-12 00:10:17.888 [INFO][7011] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:10:17.893607 containerd[1748]: 2025-07-12 00:10:17.891 [INFO][7001] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" Jul 12 00:10:17.895934 containerd[1748]: time="2025-07-12T00:10:17.895883434Z" level=info msg="TearDown network for sandbox \"5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569\" successfully" Jul 12 00:10:17.895934 containerd[1748]: time="2025-07-12T00:10:17.895926994Z" level=info msg="StopPodSandbox for \"5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569\" returns successfully" Jul 12 00:10:17.900203 systemd[1]: run-netns-cni\x2d8e9e9a42\x2dae94\x2d4a2f\x2d74ef\x2d8481de265ead.mount: Deactivated successfully. 
Jul 12 00:10:18.048097 kubelet[3127]: I0712 00:10:18.047549 3127 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncl8w\" (UniqueName: \"kubernetes.io/projected/9c5661c3-a6ae-4537-a0fb-159b28c4d8b2-kube-api-access-ncl8w\") pod \"9c5661c3-a6ae-4537-a0fb-159b28c4d8b2\" (UID: \"9c5661c3-a6ae-4537-a0fb-159b28c4d8b2\") " Jul 12 00:10:18.048097 kubelet[3127]: I0712 00:10:18.047626 3127 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9c5661c3-a6ae-4537-a0fb-159b28c4d8b2-calico-apiserver-certs\") pod \"9c5661c3-a6ae-4537-a0fb-159b28c4d8b2\" (UID: \"9c5661c3-a6ae-4537-a0fb-159b28c4d8b2\") " Jul 12 00:10:18.054903 kubelet[3127]: I0712 00:10:18.054861 3127 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c5661c3-a6ae-4537-a0fb-159b28c4d8b2-kube-api-access-ncl8w" (OuterVolumeSpecName: "kube-api-access-ncl8w") pod "9c5661c3-a6ae-4537-a0fb-159b28c4d8b2" (UID: "9c5661c3-a6ae-4537-a0fb-159b28c4d8b2"). InnerVolumeSpecName "kube-api-access-ncl8w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:10:18.058088 kubelet[3127]: I0712 00:10:18.058015 3127 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c5661c3-a6ae-4537-a0fb-159b28c4d8b2-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "9c5661c3-a6ae-4537-a0fb-159b28c4d8b2" (UID: "9c5661c3-a6ae-4537-a0fb-159b28c4d8b2"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 12 00:10:18.059137 systemd[1]: var-lib-kubelet-pods-9c5661c3\x2da6ae\x2d4537\x2da0fb\x2d159b28c4d8b2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dncl8w.mount: Deactivated successfully. 
Jul 12 00:10:18.149030 kubelet[3127]: I0712 00:10:18.148888 3127 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ncl8w\" (UniqueName: \"kubernetes.io/projected/9c5661c3-a6ae-4537-a0fb-159b28c4d8b2-kube-api-access-ncl8w\") on node \"ci-4081.3.4-n-047a586f92\" DevicePath \"\"" Jul 12 00:10:18.149030 kubelet[3127]: I0712 00:10:18.148921 3127 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9c5661c3-a6ae-4537-a0fb-159b28c4d8b2-calico-apiserver-certs\") on node \"ci-4081.3.4-n-047a586f92\" DevicePath \"\"" Jul 12 00:10:18.403832 kubelet[3127]: I0712 00:10:18.402790 3127 scope.go:117] "RemoveContainer" containerID="b3ee9725dcc468c6f1f9e463ff3afbb0f202d43620d8012b4dc9fcbb9cddc252" Jul 12 00:10:18.404598 systemd[1]: var-lib-kubelet-pods-9c5661c3\x2da6ae\x2d4537\x2da0fb\x2d159b28c4d8b2-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Jul 12 00:10:18.407946 containerd[1748]: time="2025-07-12T00:10:18.407895233Z" level=info msg="RemoveContainer for \"b3ee9725dcc468c6f1f9e463ff3afbb0f202d43620d8012b4dc9fcbb9cddc252\"" Jul 12 00:10:18.417532 systemd[1]: Removed slice kubepods-besteffort-pod9c5661c3_a6ae_4537_a0fb_159b28c4d8b2.slice - libcontainer container kubepods-besteffort-pod9c5661c3_a6ae_4537_a0fb_159b28c4d8b2.slice. Jul 12 00:10:18.417678 systemd[1]: kubepods-besteffort-pod9c5661c3_a6ae_4537_a0fb_159b28c4d8b2.slice: Consumed 1.227s CPU time. 
Jul 12 00:10:18.420438 containerd[1748]: time="2025-07-12T00:10:18.419633419Z" level=info msg="RemoveContainer for \"b3ee9725dcc468c6f1f9e463ff3afbb0f202d43620d8012b4dc9fcbb9cddc252\" returns successfully" Jul 12 00:10:18.421212 kubelet[3127]: I0712 00:10:18.421177 3127 scope.go:117] "RemoveContainer" containerID="b3ee9725dcc468c6f1f9e463ff3afbb0f202d43620d8012b4dc9fcbb9cddc252" Jul 12 00:10:18.423158 containerd[1748]: time="2025-07-12T00:10:18.423077615Z" level=error msg="ContainerStatus for \"b3ee9725dcc468c6f1f9e463ff3afbb0f202d43620d8012b4dc9fcbb9cddc252\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b3ee9725dcc468c6f1f9e463ff3afbb0f202d43620d8012b4dc9fcbb9cddc252\": not found" Jul 12 00:10:18.423869 kubelet[3127]: E0712 00:10:18.423270 3127 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b3ee9725dcc468c6f1f9e463ff3afbb0f202d43620d8012b4dc9fcbb9cddc252\": not found" containerID="b3ee9725dcc468c6f1f9e463ff3afbb0f202d43620d8012b4dc9fcbb9cddc252" Jul 12 00:10:18.423869 kubelet[3127]: I0712 00:10:18.423309 3127 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b3ee9725dcc468c6f1f9e463ff3afbb0f202d43620d8012b4dc9fcbb9cddc252"} err="failed to get container status \"b3ee9725dcc468c6f1f9e463ff3afbb0f202d43620d8012b4dc9fcbb9cddc252\": rpc error: code = NotFound desc = an error occurred when try to find container \"b3ee9725dcc468c6f1f9e463ff3afbb0f202d43620d8012b4dc9fcbb9cddc252\": not found" Jul 12 00:10:18.933957 kubelet[3127]: I0712 00:10:18.933553 3127 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c5661c3-a6ae-4537-a0fb-159b28c4d8b2" path="/var/lib/kubelet/pods/9c5661c3-a6ae-4537-a0fb-159b28c4d8b2/volumes" Jul 12 00:10:47.447559 kubelet[3127]: I0712 00:10:47.447505 3127 scope.go:117] "RemoveContainer" 
containerID="0de2a2a76741027de802c62377d3ba91c519ee902851d0ae9ce2aa5a2bd7b041" Jul 12 00:10:47.450660 containerd[1748]: time="2025-07-12T00:10:47.450509495Z" level=info msg="RemoveContainer for \"0de2a2a76741027de802c62377d3ba91c519ee902851d0ae9ce2aa5a2bd7b041\"" Jul 12 00:10:47.464056 containerd[1748]: time="2025-07-12T00:10:47.463998680Z" level=info msg="RemoveContainer for \"0de2a2a76741027de802c62377d3ba91c519ee902851d0ae9ce2aa5a2bd7b041\" returns successfully" Jul 12 00:10:47.465932 containerd[1748]: time="2025-07-12T00:10:47.465898038Z" level=info msg="StopPodSandbox for \"0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7\"" Jul 12 00:10:47.546297 containerd[1748]: 2025-07-12 00:10:47.505 [WARNING][7111] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--s85hw-eth0" Jul 12 00:10:47.546297 containerd[1748]: 2025-07-12 00:10:47.505 [INFO][7111] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" Jul 12 00:10:47.546297 containerd[1748]: 2025-07-12 00:10:47.505 [INFO][7111] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" iface="eth0" netns="" Jul 12 00:10:47.546297 containerd[1748]: 2025-07-12 00:10:47.505 [INFO][7111] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" Jul 12 00:10:47.546297 containerd[1748]: 2025-07-12 00:10:47.505 [INFO][7111] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" Jul 12 00:10:47.546297 containerd[1748]: 2025-07-12 00:10:47.532 [INFO][7118] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" HandleID="k8s-pod-network.0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--s85hw-eth0" Jul 12 00:10:47.546297 containerd[1748]: 2025-07-12 00:10:47.532 [INFO][7118] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:10:47.546297 containerd[1748]: 2025-07-12 00:10:47.532 [INFO][7118] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:10:47.546297 containerd[1748]: 2025-07-12 00:10:47.541 [WARNING][7118] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" HandleID="k8s-pod-network.0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--s85hw-eth0" Jul 12 00:10:47.546297 containerd[1748]: 2025-07-12 00:10:47.541 [INFO][7118] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" HandleID="k8s-pod-network.0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--s85hw-eth0" Jul 12 00:10:47.546297 containerd[1748]: 2025-07-12 00:10:47.542 [INFO][7118] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:10:47.546297 containerd[1748]: 2025-07-12 00:10:47.544 [INFO][7111] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" Jul 12 00:10:47.546297 containerd[1748]: time="2025-07-12T00:10:47.546165386Z" level=info msg="TearDown network for sandbox \"0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7\" successfully" Jul 12 00:10:47.546297 containerd[1748]: time="2025-07-12T00:10:47.546191906Z" level=info msg="StopPodSandbox for \"0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7\" returns successfully" Jul 12 00:10:47.547244 containerd[1748]: time="2025-07-12T00:10:47.547184985Z" level=info msg="RemovePodSandbox for \"0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7\"" Jul 12 00:10:47.547244 containerd[1748]: time="2025-07-12T00:10:47.547234025Z" level=info msg="Forcibly stopping sandbox \"0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7\"" Jul 12 00:10:47.625556 containerd[1748]: 2025-07-12 00:10:47.589 [WARNING][7133] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--s85hw-eth0" Jul 12 00:10:47.625556 containerd[1748]: 2025-07-12 00:10:47.589 [INFO][7133] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" Jul 12 00:10:47.625556 containerd[1748]: 2025-07-12 00:10:47.589 [INFO][7133] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" iface="eth0" netns="" Jul 12 00:10:47.625556 containerd[1748]: 2025-07-12 00:10:47.589 [INFO][7133] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" Jul 12 00:10:47.625556 containerd[1748]: 2025-07-12 00:10:47.589 [INFO][7133] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" Jul 12 00:10:47.625556 containerd[1748]: 2025-07-12 00:10:47.611 [INFO][7140] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" HandleID="k8s-pod-network.0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--s85hw-eth0" Jul 12 00:10:47.625556 containerd[1748]: 2025-07-12 00:10:47.611 [INFO][7140] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:10:47.625556 containerd[1748]: 2025-07-12 00:10:47.611 [INFO][7140] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:10:47.625556 containerd[1748]: 2025-07-12 00:10:47.620 [WARNING][7140] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" HandleID="k8s-pod-network.0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--s85hw-eth0" Jul 12 00:10:47.625556 containerd[1748]: 2025-07-12 00:10:47.620 [INFO][7140] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" HandleID="k8s-pod-network.0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--s85hw-eth0" Jul 12 00:10:47.625556 containerd[1748]: 2025-07-12 00:10:47.622 [INFO][7140] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:10:47.625556 containerd[1748]: 2025-07-12 00:10:47.623 [INFO][7133] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7" Jul 12 00:10:47.625969 containerd[1748]: time="2025-07-12T00:10:47.625565215Z" level=info msg="TearDown network for sandbox \"0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7\" successfully" Jul 12 00:10:47.640667 containerd[1748]: time="2025-07-12T00:10:47.640565838Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 12 00:10:47.640815 containerd[1748]: time="2025-07-12T00:10:47.640710998Z" level=info msg="RemovePodSandbox \"0764513702e113c783009700f2d7d93c9816dfb3e1591e0c8908cf0e749eedb7\" returns successfully" Jul 12 00:10:47.641268 containerd[1748]: time="2025-07-12T00:10:47.641217397Z" level=info msg="StopPodSandbox for \"5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569\"" Jul 12 00:10:47.726574 containerd[1748]: 2025-07-12 00:10:47.684 [WARNING][7154] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--sgjtg-eth0" Jul 12 00:10:47.726574 containerd[1748]: 2025-07-12 00:10:47.685 [INFO][7154] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" Jul 12 00:10:47.726574 containerd[1748]: 2025-07-12 00:10:47.685 [INFO][7154] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" iface="eth0" netns=""
Jul 12 00:10:47.726574 containerd[1748]: 2025-07-12 00:10:47.685 [INFO][7154] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569"
Jul 12 00:10:47.726574 containerd[1748]: 2025-07-12 00:10:47.685 [INFO][7154] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569"
Jul 12 00:10:47.726574 containerd[1748]: 2025-07-12 00:10:47.710 [INFO][7161] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" HandleID="k8s-pod-network.5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--sgjtg-eth0"
Jul 12 00:10:47.726574 containerd[1748]: 2025-07-12 00:10:47.710 [INFO][7161] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 12 00:10:47.726574 containerd[1748]: 2025-07-12 00:10:47.710 [INFO][7161] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 12 00:10:47.726574 containerd[1748]: 2025-07-12 00:10:47.721 [WARNING][7161] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" HandleID="k8s-pod-network.5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--sgjtg-eth0"
Jul 12 00:10:47.726574 containerd[1748]: 2025-07-12 00:10:47.721 [INFO][7161] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" HandleID="k8s-pod-network.5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--sgjtg-eth0"
Jul 12 00:10:47.726574 containerd[1748]: 2025-07-12 00:10:47.723 [INFO][7161] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 12 00:10:47.726574 containerd[1748]: 2025-07-12 00:10:47.725 [INFO][7154] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569"
Jul 12 00:10:47.727347 containerd[1748]: time="2025-07-12T00:10:47.727049859Z" level=info msg="TearDown network for sandbox \"5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569\" successfully"
Jul 12 00:10:47.727347 containerd[1748]: time="2025-07-12T00:10:47.727099539Z" level=info msg="StopPodSandbox for \"5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569\" returns successfully"
Jul 12 00:10:47.727737 containerd[1748]: time="2025-07-12T00:10:47.727700978Z" level=info msg="RemovePodSandbox for \"5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569\""
Jul 12 00:10:47.727837 containerd[1748]: time="2025-07-12T00:10:47.727760978Z" level=info msg="Forcibly stopping sandbox \"5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569\""
Jul 12 00:10:47.800276 containerd[1748]: 2025-07-12 00:10:47.764 [WARNING][7175] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" WorkloadEndpoint="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--sgjtg-eth0"
Jul 12 00:10:47.800276 containerd[1748]: 2025-07-12 00:10:47.764 [INFO][7175] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569"
Jul 12 00:10:47.800276 containerd[1748]: 2025-07-12 00:10:47.764 [INFO][7175] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" iface="eth0" netns=""
Jul 12 00:10:47.800276 containerd[1748]: 2025-07-12 00:10:47.764 [INFO][7175] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569"
Jul 12 00:10:47.800276 containerd[1748]: 2025-07-12 00:10:47.764 [INFO][7175] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569"
Jul 12 00:10:47.800276 containerd[1748]: 2025-07-12 00:10:47.785 [INFO][7182] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" HandleID="k8s-pod-network.5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--sgjtg-eth0"
Jul 12 00:10:47.800276 containerd[1748]: 2025-07-12 00:10:47.785 [INFO][7182] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 12 00:10:47.800276 containerd[1748]: 2025-07-12 00:10:47.786 [INFO][7182] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 12 00:10:47.800276 containerd[1748]: 2025-07-12 00:10:47.795 [WARNING][7182] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" HandleID="k8s-pod-network.5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--sgjtg-eth0"
Jul 12 00:10:47.800276 containerd[1748]: 2025-07-12 00:10:47.795 [INFO][7182] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" HandleID="k8s-pod-network.5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569" Workload="ci--4081.3.4--n--047a586f92-k8s-calico--apiserver--7fc958d55f--sgjtg-eth0"
Jul 12 00:10:47.800276 containerd[1748]: 2025-07-12 00:10:47.796 [INFO][7182] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 12 00:10:47.800276 containerd[1748]: 2025-07-12 00:10:47.798 [INFO][7175] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569"
Jul 12 00:10:47.800741 containerd[1748]: time="2025-07-12T00:10:47.800394855Z" level=info msg="TearDown network for sandbox \"5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569\" successfully"
Jul 12 00:10:47.811328 containerd[1748]: time="2025-07-12T00:10:47.811268843Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 12 00:10:47.811497 containerd[1748]: time="2025-07-12T00:10:47.811361043Z" level=info msg="RemovePodSandbox \"5bcc4fabd5d0ccf5990ab36573ee088ad3526a2a5ff94aa1a3d9de860b046569\" returns successfully"
Jul 12 00:11:33.489603 systemd[1]: Started sshd@7-10.200.20.17:22-10.200.16.10:47956.service - OpenSSH per-connection server daemon (10.200.16.10:47956).
Jul 12 00:11:33.947774 sshd[7352]: Accepted publickey for core from 10.200.16.10 port 47956 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA
Jul 12 00:11:33.949934 sshd[7352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:11:33.954646 systemd-logind[1686]: New session 10 of user core.
Jul 12 00:11:33.962457 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 12 00:11:34.374856 sshd[7352]: pam_unix(sshd:session): session closed for user core
Jul 12 00:11:34.379758 systemd[1]: sshd@7-10.200.20.17:22-10.200.16.10:47956.service: Deactivated successfully.
Jul 12 00:11:34.382965 systemd[1]: session-10.scope: Deactivated successfully.
Jul 12 00:11:34.384208 systemd-logind[1686]: Session 10 logged out. Waiting for processes to exit.
Jul 12 00:11:34.386336 systemd-logind[1686]: Removed session 10.
Jul 12 00:11:39.466987 systemd[1]: Started sshd@8-10.200.20.17:22-10.200.16.10:47968.service - OpenSSH per-connection server daemon (10.200.16.10:47968).
Jul 12 00:11:39.919477 sshd[7385]: Accepted publickey for core from 10.200.16.10 port 47968 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA
Jul 12 00:11:39.921504 sshd[7385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:11:39.926450 systemd-logind[1686]: New session 11 of user core.
Jul 12 00:11:39.931528 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 12 00:11:40.343530 sshd[7385]: pam_unix(sshd:session): session closed for user core
Jul 12 00:11:40.347648 systemd[1]: sshd@8-10.200.20.17:22-10.200.16.10:47968.service: Deactivated successfully.
Jul 12 00:11:40.350223 systemd[1]: session-11.scope: Deactivated successfully.
Jul 12 00:11:40.350937 systemd-logind[1686]: Session 11 logged out. Waiting for processes to exit.
Jul 12 00:11:40.352527 systemd-logind[1686]: Removed session 11.
Jul 12 00:11:45.434572 systemd[1]: Started sshd@9-10.200.20.17:22-10.200.16.10:40980.service - OpenSSH per-connection server daemon (10.200.16.10:40980).
Jul 12 00:11:45.884800 sshd[7422]: Accepted publickey for core from 10.200.16.10 port 40980 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA
Jul 12 00:11:45.886384 sshd[7422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:11:45.892393 systemd-logind[1686]: New session 12 of user core.
Jul 12 00:11:45.900506 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 12 00:11:46.294956 sshd[7422]: pam_unix(sshd:session): session closed for user core
Jul 12 00:11:46.299293 systemd[1]: sshd@9-10.200.20.17:22-10.200.16.10:40980.service: Deactivated successfully.
Jul 12 00:11:46.301981 systemd[1]: session-12.scope: Deactivated successfully.
Jul 12 00:11:46.303336 systemd-logind[1686]: Session 12 logged out. Waiting for processes to exit.
Jul 12 00:11:46.305006 systemd-logind[1686]: Removed session 12.
Jul 12 00:11:51.386297 systemd[1]: Started sshd@10-10.200.20.17:22-10.200.16.10:48304.service - OpenSSH per-connection server daemon (10.200.16.10:48304).
Jul 12 00:11:51.842075 sshd[7456]: Accepted publickey for core from 10.200.16.10 port 48304 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA
Jul 12 00:11:51.843605 sshd[7456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:11:51.849325 systemd-logind[1686]: New session 13 of user core.
Jul 12 00:11:51.855469 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 12 00:11:52.252538 sshd[7456]: pam_unix(sshd:session): session closed for user core
Jul 12 00:11:52.256818 systemd[1]: sshd@10-10.200.20.17:22-10.200.16.10:48304.service: Deactivated successfully.
Jul 12 00:11:52.259192 systemd[1]: session-13.scope: Deactivated successfully.
Jul 12 00:11:52.260676 systemd-logind[1686]: Session 13 logged out. Waiting for processes to exit.
Jul 12 00:11:52.261809 systemd-logind[1686]: Removed session 13.
Jul 12 00:11:52.335616 systemd[1]: Started sshd@11-10.200.20.17:22-10.200.16.10:48312.service - OpenSSH per-connection server daemon (10.200.16.10:48312).
Jul 12 00:11:52.795427 sshd[7470]: Accepted publickey for core from 10.200.16.10 port 48312 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA
Jul 12 00:11:52.796972 sshd[7470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:11:52.801179 systemd-logind[1686]: New session 14 of user core.
Jul 12 00:11:52.807508 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 12 00:11:53.234843 sshd[7470]: pam_unix(sshd:session): session closed for user core
Jul 12 00:11:53.238895 systemd[1]: sshd@11-10.200.20.17:22-10.200.16.10:48312.service: Deactivated successfully.
Jul 12 00:11:53.241790 systemd[1]: session-14.scope: Deactivated successfully.
Jul 12 00:11:53.243046 systemd-logind[1686]: Session 14 logged out. Waiting for processes to exit.
Jul 12 00:11:53.244214 systemd-logind[1686]: Removed session 14.
Jul 12 00:11:53.328588 systemd[1]: Started sshd@12-10.200.20.17:22-10.200.16.10:48314.service - OpenSSH per-connection server daemon (10.200.16.10:48314).
Jul 12 00:11:53.802324 sshd[7483]: Accepted publickey for core from 10.200.16.10 port 48314 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA
Jul 12 00:11:53.803875 sshd[7483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:11:53.808809 systemd-logind[1686]: New session 15 of user core.
Jul 12 00:11:53.815459 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 12 00:11:54.216933 sshd[7483]: pam_unix(sshd:session): session closed for user core
Jul 12 00:11:54.221030 systemd[1]: sshd@12-10.200.20.17:22-10.200.16.10:48314.service: Deactivated successfully.
Jul 12 00:11:54.223857 systemd[1]: session-15.scope: Deactivated successfully.
Jul 12 00:11:54.226471 systemd-logind[1686]: Session 15 logged out. Waiting for processes to exit.
Jul 12 00:11:54.227711 systemd-logind[1686]: Removed session 15.
Jul 12 00:11:59.311631 systemd[1]: Started sshd@13-10.200.20.17:22-10.200.16.10:48324.service - OpenSSH per-connection server daemon (10.200.16.10:48324).
Jul 12 00:11:59.799950 sshd[7519]: Accepted publickey for core from 10.200.16.10 port 48324 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA
Jul 12 00:11:59.802059 sshd[7519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:11:59.806689 systemd-logind[1686]: New session 16 of user core.
Jul 12 00:11:59.814644 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 12 00:12:00.229588 sshd[7519]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:00.234043 systemd[1]: sshd@13-10.200.20.17:22-10.200.16.10:48324.service: Deactivated successfully.
Jul 12 00:12:00.234146 systemd-logind[1686]: Session 16 logged out. Waiting for processes to exit.
Jul 12 00:12:00.237061 systemd[1]: session-16.scope: Deactivated successfully.
Jul 12 00:12:00.238397 systemd-logind[1686]: Removed session 16.
Jul 12 00:12:05.310548 systemd[1]: Started sshd@14-10.200.20.17:22-10.200.16.10:43932.service - OpenSSH per-connection server daemon (10.200.16.10:43932).
Jul 12 00:12:05.739571 sshd[7531]: Accepted publickey for core from 10.200.16.10 port 43932 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA
Jul 12 00:12:05.741801 sshd[7531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:12:05.746511 systemd-logind[1686]: New session 17 of user core.
Jul 12 00:12:05.751480 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 12 00:12:06.144196 sshd[7531]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:06.147713 systemd-logind[1686]: Session 17 logged out. Waiting for processes to exit.
Jul 12 00:12:06.148730 systemd[1]: sshd@14-10.200.20.17:22-10.200.16.10:43932.service: Deactivated successfully.
Jul 12 00:12:06.152090 systemd[1]: session-17.scope: Deactivated successfully.
Jul 12 00:12:06.154501 systemd-logind[1686]: Removed session 17.
Jul 12 00:12:11.236285 systemd[1]: Started sshd@15-10.200.20.17:22-10.200.16.10:41032.service - OpenSSH per-connection server daemon (10.200.16.10:41032).
Jul 12 00:12:11.719729 sshd[7605]: Accepted publickey for core from 10.200.16.10 port 41032 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA
Jul 12 00:12:11.721619 sshd[7605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:12:11.727361 systemd-logind[1686]: New session 18 of user core.
Jul 12 00:12:11.729455 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 12 00:12:12.134088 sshd[7605]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:12.138173 systemd[1]: sshd@15-10.200.20.17:22-10.200.16.10:41032.service: Deactivated successfully.
Jul 12 00:12:12.140893 systemd[1]: session-18.scope: Deactivated successfully.
Jul 12 00:12:12.141892 systemd-logind[1686]: Session 18 logged out. Waiting for processes to exit.
Jul 12 00:12:12.143078 systemd-logind[1686]: Removed session 18.
Jul 12 00:12:12.245635 systemd[1]: Started sshd@16-10.200.20.17:22-10.200.16.10:41048.service - OpenSSH per-connection server daemon (10.200.16.10:41048).
Jul 12 00:12:12.692898 sshd[7618]: Accepted publickey for core from 10.200.16.10 port 41048 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA
Jul 12 00:12:12.694498 sshd[7618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:12:12.699329 systemd-logind[1686]: New session 19 of user core.
Jul 12 00:12:12.707479 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 12 00:12:13.259211 sshd[7618]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:13.262919 systemd[1]: sshd@16-10.200.20.17:22-10.200.16.10:41048.service: Deactivated successfully.
Jul 12 00:12:13.265582 systemd[1]: session-19.scope: Deactivated successfully.
Jul 12 00:12:13.266598 systemd-logind[1686]: Session 19 logged out. Waiting for processes to exit.
Jul 12 00:12:13.267617 systemd-logind[1686]: Removed session 19.
Jul 12 00:12:13.345571 systemd[1]: Started sshd@17-10.200.20.17:22-10.200.16.10:41062.service - OpenSSH per-connection server daemon (10.200.16.10:41062).
Jul 12 00:12:13.778569 sshd[7629]: Accepted publickey for core from 10.200.16.10 port 41062 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA
Jul 12 00:12:13.780187 sshd[7629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:12:13.784581 systemd-logind[1686]: New session 20 of user core.
Jul 12 00:12:13.793496 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 12 00:12:14.783616 sshd[7629]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:14.788538 systemd[1]: sshd@17-10.200.20.17:22-10.200.16.10:41062.service: Deactivated successfully.
Jul 12 00:12:14.791699 systemd[1]: session-20.scope: Deactivated successfully.
Jul 12 00:12:14.792616 systemd-logind[1686]: Session 20 logged out. Waiting for processes to exit.
Jul 12 00:12:14.794847 systemd-logind[1686]: Removed session 20.
Jul 12 00:12:14.871577 systemd[1]: Started sshd@18-10.200.20.17:22-10.200.16.10:41066.service - OpenSSH per-connection server daemon (10.200.16.10:41066).
Jul 12 00:12:15.325390 sshd[7648]: Accepted publickey for core from 10.200.16.10 port 41066 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA
Jul 12 00:12:15.327485 sshd[7648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:12:15.331763 systemd-logind[1686]: New session 21 of user core.
Jul 12 00:12:15.340499 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 12 00:12:15.865952 sshd[7648]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:15.869901 systemd[1]: sshd@18-10.200.20.17:22-10.200.16.10:41066.service: Deactivated successfully.
Jul 12 00:12:15.872318 systemd[1]: session-21.scope: Deactivated successfully.
Jul 12 00:12:15.873493 systemd-logind[1686]: Session 21 logged out. Waiting for processes to exit.
Jul 12 00:12:15.875709 systemd-logind[1686]: Removed session 21.
Jul 12 00:12:15.956562 systemd[1]: Started sshd@19-10.200.20.17:22-10.200.16.10:41070.service - OpenSSH per-connection server daemon (10.200.16.10:41070).
Jul 12 00:12:16.403407 sshd[7659]: Accepted publickey for core from 10.200.16.10 port 41070 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA
Jul 12 00:12:16.405023 sshd[7659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:12:16.411029 systemd-logind[1686]: New session 22 of user core.
Jul 12 00:12:16.415472 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 12 00:12:16.814202 sshd[7659]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:16.817848 systemd[1]: session-22.scope: Deactivated successfully.
Jul 12 00:12:16.818607 systemd[1]: sshd@19-10.200.20.17:22-10.200.16.10:41070.service: Deactivated successfully.
Jul 12 00:12:16.822728 systemd-logind[1686]: Session 22 logged out. Waiting for processes to exit.
Jul 12 00:12:16.823952 systemd-logind[1686]: Removed session 22.
Jul 12 00:12:21.901626 systemd[1]: Started sshd@20-10.200.20.17:22-10.200.16.10:36294.service - OpenSSH per-connection server daemon (10.200.16.10:36294).
Jul 12 00:12:22.351186 sshd[7700]: Accepted publickey for core from 10.200.16.10 port 36294 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA
Jul 12 00:12:22.352723 sshd[7700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:12:22.357245 systemd-logind[1686]: New session 23 of user core.
Jul 12 00:12:22.361487 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 12 00:12:22.760521 sshd[7700]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:22.764008 systemd-logind[1686]: Session 23 logged out. Waiting for processes to exit.
Jul 12 00:12:22.764738 systemd[1]: sshd@20-10.200.20.17:22-10.200.16.10:36294.service: Deactivated successfully.
Jul 12 00:12:22.767204 systemd[1]: session-23.scope: Deactivated successfully.
Jul 12 00:12:22.768954 systemd-logind[1686]: Removed session 23.
Jul 12 00:12:27.852456 systemd[1]: Started sshd@21-10.200.20.17:22-10.200.16.10:36296.service - OpenSSH per-connection server daemon (10.200.16.10:36296).
Jul 12 00:12:28.304663 sshd[7715]: Accepted publickey for core from 10.200.16.10 port 36296 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA
Jul 12 00:12:28.306288 sshd[7715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:12:28.313491 systemd-logind[1686]: New session 24 of user core.
Jul 12 00:12:28.319515 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 12 00:12:28.764182 sshd[7715]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:28.768690 systemd-logind[1686]: Session 24 logged out. Waiting for processes to exit.
Jul 12 00:12:28.770926 systemd[1]: sshd@21-10.200.20.17:22-10.200.16.10:36296.service: Deactivated successfully.
Jul 12 00:12:28.775909 systemd[1]: session-24.scope: Deactivated successfully.
Jul 12 00:12:28.777965 systemd-logind[1686]: Removed session 24.
Jul 12 00:12:33.844622 systemd[1]: Started sshd@22-10.200.20.17:22-10.200.16.10:46782.service - OpenSSH per-connection server daemon (10.200.16.10:46782).
Jul 12 00:12:34.276112 sshd[7728]: Accepted publickey for core from 10.200.16.10 port 46782 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA
Jul 12 00:12:34.277712 sshd[7728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:12:34.282287 systemd-logind[1686]: New session 25 of user core.
Jul 12 00:12:34.290522 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 12 00:12:34.676543 sshd[7728]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:34.681161 systemd[1]: sshd@22-10.200.20.17:22-10.200.16.10:46782.service: Deactivated successfully.
Jul 12 00:12:34.683174 systemd[1]: session-25.scope: Deactivated successfully.
Jul 12 00:12:34.684727 systemd-logind[1686]: Session 25 logged out. Waiting for processes to exit.
Jul 12 00:12:34.686590 systemd-logind[1686]: Removed session 25.
Jul 12 00:12:38.242390 systemd[1]: run-containerd-runc-k8s.io-711629c4fc57817b644db027a4c4ba677150a8eb7f999193ae47ac853429ab73-runc.hZGj0D.mount: Deactivated successfully.
Jul 12 00:12:39.766801 systemd[1]: Started sshd@23-10.200.20.17:22-10.200.16.10:46956.service - OpenSSH per-connection server daemon (10.200.16.10:46956).
Jul 12 00:12:40.232396 sshd[7766]: Accepted publickey for core from 10.200.16.10 port 46956 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA
Jul 12 00:12:40.234137 sshd[7766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:12:40.238396 systemd-logind[1686]: New session 26 of user core.
Jul 12 00:12:40.243574 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 12 00:12:40.651660 sshd[7766]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:40.655840 systemd[1]: sshd@23-10.200.20.17:22-10.200.16.10:46956.service: Deactivated successfully.
Jul 12 00:12:40.660076 systemd[1]: session-26.scope: Deactivated successfully.
Jul 12 00:12:40.665423 systemd-logind[1686]: Session 26 logged out. Waiting for processes to exit.
Jul 12 00:12:40.667667 systemd-logind[1686]: Removed session 26.
Jul 12 00:12:41.046054 systemd[1]: run-containerd-runc-k8s.io-a1c4b35a14977c444a775b45e1dd5ff4e84168abe122e0f17b4dd6db60476dd9-runc.cYGyYH.mount: Deactivated successfully.
Jul 12 00:12:45.746662 systemd[1]: Started sshd@24-10.200.20.17:22-10.200.16.10:46962.service - OpenSSH per-connection server daemon (10.200.16.10:46962).
Jul 12 00:12:46.239010 sshd[7817]: Accepted publickey for core from 10.200.16.10 port 46962 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA
Jul 12 00:12:46.240829 sshd[7817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:12:46.246195 systemd-logind[1686]: New session 27 of user core.
Jul 12 00:12:46.251453 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 12 00:12:46.654454 sshd[7817]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:46.659587 systemd[1]: sshd@24-10.200.20.17:22-10.200.16.10:46962.service: Deactivated successfully.
Jul 12 00:12:46.663219 systemd[1]: session-27.scope: Deactivated successfully.
Jul 12 00:12:46.664671 systemd-logind[1686]: Session 27 logged out. Waiting for processes to exit.
Jul 12 00:12:46.665720 systemd-logind[1686]: Removed session 27.