Apr 30 00:36:40.281908 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Apr 30 00:36:40.281928 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Apr 29 23:08:45 -00 2025 Apr 30 00:36:40.281937 kernel: KASLR enabled Apr 30 00:36:40.281943 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Apr 30 00:36:40.281950 kernel: printk: bootconsole [pl11] enabled Apr 30 00:36:40.281955 kernel: efi: EFI v2.7 by EDK II Apr 30 00:36:40.281963 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Apr 30 00:36:40.281969 kernel: random: crng init done Apr 30 00:36:40.281975 kernel: ACPI: Early table checksum verification disabled Apr 30 00:36:40.281981 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Apr 30 00:36:40.281988 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 00:36:40.281994 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 00:36:40.282001 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Apr 30 00:36:40.282007 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 00:36:40.282015 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 00:36:40.282021 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 00:36:40.282028 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 00:36:40.282035 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 00:36:40.282042 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 00:36:40.282048 
kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Apr 30 00:36:40.282055 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 00:36:40.282061 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Apr 30 00:36:40.282067 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Apr 30 00:36:40.282074 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Apr 30 00:36:40.282080 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Apr 30 00:36:40.282086 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Apr 30 00:36:40.282093 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Apr 30 00:36:40.282099 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Apr 30 00:36:40.282107 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Apr 30 00:36:40.282113 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Apr 30 00:36:40.282120 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Apr 30 00:36:40.282126 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Apr 30 00:36:40.282132 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Apr 30 00:36:40.282139 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Apr 30 00:36:40.282145 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Apr 30 00:36:40.282151 kernel: Zone ranges: Apr 30 00:36:40.282157 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Apr 30 00:36:40.282164 kernel: DMA32 empty Apr 30 00:36:40.282170 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Apr 30 00:36:40.282176 kernel: Movable zone start for each node Apr 30 00:36:40.282186 kernel: Early memory node ranges Apr 30 00:36:40.282193 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Apr 30 00:36:40.282200 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Apr 30 00:36:40.282207 kernel: 
node 0: [mem 0x000000003e550000-0x000000003e87ffff] Apr 30 00:36:40.282213 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Apr 30 00:36:40.282221 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Apr 30 00:36:40.282228 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Apr 30 00:36:40.282235 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Apr 30 00:36:40.282242 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Apr 30 00:36:40.282249 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Apr 30 00:36:40.282255 kernel: psci: probing for conduit method from ACPI. Apr 30 00:36:40.282262 kernel: psci: PSCIv1.1 detected in firmware. Apr 30 00:36:40.282269 kernel: psci: Using standard PSCI v0.2 function IDs Apr 30 00:36:40.282276 kernel: psci: MIGRATE_INFO_TYPE not supported. Apr 30 00:36:40.282282 kernel: psci: SMC Calling Convention v1.4 Apr 30 00:36:40.282289 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Apr 30 00:36:40.282311 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Apr 30 00:36:40.282320 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Apr 30 00:36:40.282327 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Apr 30 00:36:40.282334 kernel: pcpu-alloc: [0] 0 [0] 1 Apr 30 00:36:40.282341 kernel: Detected PIPT I-cache on CPU0 Apr 30 00:36:40.282348 kernel: CPU features: detected: GIC system register CPU interface Apr 30 00:36:40.282355 kernel: CPU features: detected: Hardware dirty bit management Apr 30 00:36:40.282361 kernel: CPU features: detected: Spectre-BHB Apr 30 00:36:40.282368 kernel: CPU features: kernel page table isolation forced ON by KASLR Apr 30 00:36:40.282375 kernel: CPU features: detected: Kernel page table isolation (KPTI) Apr 30 00:36:40.282381 kernel: CPU features: detected: ARM erratum 1418040 Apr 30 00:36:40.282388 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Apr 30 00:36:40.282396 kernel: CPU features: 
detected: SSBS not fully self-synchronizing Apr 30 00:36:40.282403 kernel: alternatives: applying boot alternatives Apr 30 00:36:40.282411 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a Apr 30 00:36:40.282418 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Apr 30 00:36:40.282425 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 30 00:36:40.282432 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 30 00:36:40.282439 kernel: Fallback order for Node 0: 0 Apr 30 00:36:40.282446 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Apr 30 00:36:40.282452 kernel: Policy zone: Normal Apr 30 00:36:40.282459 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 30 00:36:40.282466 kernel: software IO TLB: area num 2. Apr 30 00:36:40.282474 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Apr 30 00:36:40.282481 kernel: Memory: 3982692K/4194160K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 211468K reserved, 0K cma-reserved) Apr 30 00:36:40.282488 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 30 00:36:40.282494 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 30 00:36:40.282502 kernel: rcu: RCU event tracing is enabled. Apr 30 00:36:40.282509 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 30 00:36:40.282516 kernel: Trampoline variant of Tasks RCU enabled. 
Apr 30 00:36:40.282523 kernel: Tracing variant of Tasks RCU enabled. Apr 30 00:36:40.282529 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 30 00:36:40.282536 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 30 00:36:40.282543 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Apr 30 00:36:40.282551 kernel: GICv3: 960 SPIs implemented Apr 30 00:36:40.282558 kernel: GICv3: 0 Extended SPIs implemented Apr 30 00:36:40.282564 kernel: Root IRQ handler: gic_handle_irq Apr 30 00:36:40.282571 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Apr 30 00:36:40.282578 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Apr 30 00:36:40.282584 kernel: ITS: No ITS available, not enabling LPIs Apr 30 00:36:40.282591 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 30 00:36:40.282598 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Apr 30 00:36:40.282605 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Apr 30 00:36:40.282612 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Apr 30 00:36:40.282619 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Apr 30 00:36:40.282627 kernel: Console: colour dummy device 80x25 Apr 30 00:36:40.282634 kernel: printk: console [tty1] enabled Apr 30 00:36:40.282641 kernel: ACPI: Core revision 20230628 Apr 30 00:36:40.282648 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Apr 30 00:36:40.282655 kernel: pid_max: default: 32768 minimum: 301 Apr 30 00:36:40.282662 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 30 00:36:40.282669 kernel: landlock: Up and running. Apr 30 00:36:40.282676 kernel: SELinux: Initializing. 
Apr 30 00:36:40.282683 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 30 00:36:40.282690 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 30 00:36:40.282698 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 00:36:40.282705 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 00:36:40.282712 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Apr 30 00:36:40.282719 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 Apr 30 00:36:40.282726 kernel: Hyper-V: enabling crash_kexec_post_notifiers Apr 30 00:36:40.282733 kernel: rcu: Hierarchical SRCU implementation. Apr 30 00:36:40.282740 kernel: rcu: Max phase no-delay instances is 400. Apr 30 00:36:40.282753 kernel: Remapping and enabling EFI services. Apr 30 00:36:40.282760 kernel: smp: Bringing up secondary CPUs ... Apr 30 00:36:40.282767 kernel: Detected PIPT I-cache on CPU1 Apr 30 00:36:40.282774 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Apr 30 00:36:40.282783 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Apr 30 00:36:40.282790 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Apr 30 00:36:40.282797 kernel: smp: Brought up 1 node, 2 CPUs Apr 30 00:36:40.282804 kernel: SMP: Total of 2 processors activated. 
Apr 30 00:36:40.282812 kernel: CPU features: detected: 32-bit EL0 Support Apr 30 00:36:40.282820 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Apr 30 00:36:40.282828 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Apr 30 00:36:40.282835 kernel: CPU features: detected: CRC32 instructions Apr 30 00:36:40.282842 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Apr 30 00:36:40.282849 kernel: CPU features: detected: LSE atomic instructions Apr 30 00:36:40.282857 kernel: CPU features: detected: Privileged Access Never Apr 30 00:36:40.282864 kernel: CPU: All CPU(s) started at EL1 Apr 30 00:36:40.282871 kernel: alternatives: applying system-wide alternatives Apr 30 00:36:40.282878 kernel: devtmpfs: initialized Apr 30 00:36:40.282887 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 30 00:36:40.282894 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 30 00:36:40.282902 kernel: pinctrl core: initialized pinctrl subsystem Apr 30 00:36:40.282909 kernel: SMBIOS 3.1.0 present. 
Apr 30 00:36:40.282916 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Apr 30 00:36:40.282923 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 30 00:36:40.282931 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Apr 30 00:36:40.282938 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Apr 30 00:36:40.282945 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Apr 30 00:36:40.282954 kernel: audit: initializing netlink subsys (disabled) Apr 30 00:36:40.282961 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Apr 30 00:36:40.282969 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 30 00:36:40.282976 kernel: cpuidle: using governor menu Apr 30 00:36:40.282983 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Apr 30 00:36:40.282990 kernel: ASID allocator initialised with 32768 entries Apr 30 00:36:40.282998 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 30 00:36:40.283005 kernel: Serial: AMBA PL011 UART driver Apr 30 00:36:40.283012 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Apr 30 00:36:40.283021 kernel: Modules: 0 pages in range for non-PLT usage Apr 30 00:36:40.283028 kernel: Modules: 509024 pages in range for PLT usage Apr 30 00:36:40.283035 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 30 00:36:40.283043 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Apr 30 00:36:40.283050 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Apr 30 00:36:40.283057 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Apr 30 00:36:40.283064 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 30 00:36:40.283072 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Apr 30 00:36:40.283079 kernel: HugeTLB: 
registered 64.0 KiB page size, pre-allocated 0 pages Apr 30 00:36:40.283087 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Apr 30 00:36:40.283095 kernel: ACPI: Added _OSI(Module Device) Apr 30 00:36:40.283102 kernel: ACPI: Added _OSI(Processor Device) Apr 30 00:36:40.283109 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Apr 30 00:36:40.283116 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 30 00:36:40.283124 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 30 00:36:40.283131 kernel: ACPI: Interpreter enabled Apr 30 00:36:40.283138 kernel: ACPI: Using GIC for interrupt routing Apr 30 00:36:40.283145 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Apr 30 00:36:40.283154 kernel: printk: console [ttyAMA0] enabled Apr 30 00:36:40.283161 kernel: printk: bootconsole [pl11] disabled Apr 30 00:36:40.283168 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Apr 30 00:36:40.283175 kernel: iommu: Default domain type: Translated Apr 30 00:36:40.283183 kernel: iommu: DMA domain TLB invalidation policy: strict mode Apr 30 00:36:40.283190 kernel: efivars: Registered efivars operations Apr 30 00:36:40.283197 kernel: vgaarb: loaded Apr 30 00:36:40.283204 kernel: clocksource: Switched to clocksource arch_sys_counter Apr 30 00:36:40.283211 kernel: VFS: Disk quotas dquot_6.6.0 Apr 30 00:36:40.283220 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 30 00:36:40.283227 kernel: pnp: PnP ACPI init Apr 30 00:36:40.283234 kernel: pnp: PnP ACPI: found 0 devices Apr 30 00:36:40.283241 kernel: NET: Registered PF_INET protocol family Apr 30 00:36:40.283249 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 30 00:36:40.283256 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 30 00:36:40.283264 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 30 
00:36:40.283271 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 30 00:36:40.283278 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 30 00:36:40.283287 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 30 00:36:40.288349 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 30 00:36:40.288363 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 30 00:36:40.288371 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 30 00:36:40.288379 kernel: PCI: CLS 0 bytes, default 64 Apr 30 00:36:40.288386 kernel: kvm [1]: HYP mode not available Apr 30 00:36:40.288394 kernel: Initialise system trusted keyrings Apr 30 00:36:40.288402 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 30 00:36:40.288409 kernel: Key type asymmetric registered Apr 30 00:36:40.288421 kernel: Asymmetric key parser 'x509' registered Apr 30 00:36:40.288429 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Apr 30 00:36:40.288437 kernel: io scheduler mq-deadline registered Apr 30 00:36:40.288444 kernel: io scheduler kyber registered Apr 30 00:36:40.288451 kernel: io scheduler bfq registered Apr 30 00:36:40.288459 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 30 00:36:40.288466 kernel: thunder_xcv, ver 1.0 Apr 30 00:36:40.288473 kernel: thunder_bgx, ver 1.0 Apr 30 00:36:40.288480 kernel: nicpf, ver 1.0 Apr 30 00:36:40.288487 kernel: nicvf, ver 1.0 Apr 30 00:36:40.288622 kernel: rtc-efi rtc-efi.0: registered as rtc0 Apr 30 00:36:40.288695 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-04-30T00:36:39 UTC (1745973399) Apr 30 00:36:40.288705 kernel: efifb: probing for efifb Apr 30 00:36:40.288713 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Apr 30 00:36:40.288721 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Apr 30 00:36:40.288741 kernel: efifb: scrolling: 
redraw Apr 30 00:36:40.288749 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Apr 30 00:36:40.288758 kernel: Console: switching to colour frame buffer device 128x48 Apr 30 00:36:40.288766 kernel: fb0: EFI VGA frame buffer device Apr 30 00:36:40.288773 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Apr 30 00:36:40.288781 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 30 00:36:40.288788 kernel: No ACPI PMU IRQ for CPU0 Apr 30 00:36:40.288795 kernel: No ACPI PMU IRQ for CPU1 Apr 30 00:36:40.288802 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Apr 30 00:36:40.288810 kernel: watchdog: Delayed init of the lockup detector failed: -19 Apr 30 00:36:40.288817 kernel: watchdog: Hard watchdog permanently disabled Apr 30 00:36:40.288826 kernel: NET: Registered PF_INET6 protocol family Apr 30 00:36:40.288834 kernel: Segment Routing with IPv6 Apr 30 00:36:40.288841 kernel: In-situ OAM (IOAM) with IPv6 Apr 30 00:36:40.288848 kernel: NET: Registered PF_PACKET protocol family Apr 30 00:36:40.288855 kernel: Key type dns_resolver registered Apr 30 00:36:40.288869 kernel: registered taskstats version 1 Apr 30 00:36:40.288876 kernel: Loading compiled-in X.509 certificates Apr 30 00:36:40.288884 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: e2b28159d3a83b6f5d5db45519e470b1b834e378' Apr 30 00:36:40.288891 kernel: Key type .fscrypt registered Apr 30 00:36:40.288900 kernel: Key type fscrypt-provisioning registered Apr 30 00:36:40.288908 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 30 00:36:40.288915 kernel: ima: Allocated hash algorithm: sha1 Apr 30 00:36:40.288922 kernel: ima: No architecture policies found Apr 30 00:36:40.288930 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Apr 30 00:36:40.288937 kernel: clk: Disabling unused clocks Apr 30 00:36:40.288944 kernel: Freeing unused kernel memory: 39424K Apr 30 00:36:40.288952 kernel: Run /init as init process Apr 30 00:36:40.288959 kernel: with arguments: Apr 30 00:36:40.288967 kernel: /init Apr 30 00:36:40.288974 kernel: with environment: Apr 30 00:36:40.288981 kernel: HOME=/ Apr 30 00:36:40.288989 kernel: TERM=linux Apr 30 00:36:40.288996 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 30 00:36:40.289006 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 00:36:40.289015 systemd[1]: Detected virtualization microsoft. Apr 30 00:36:40.289023 systemd[1]: Detected architecture arm64. Apr 30 00:36:40.289032 systemd[1]: Running in initrd. Apr 30 00:36:40.289040 systemd[1]: No hostname configured, using default hostname. Apr 30 00:36:40.289048 systemd[1]: Hostname set to . Apr 30 00:36:40.289056 systemd[1]: Initializing machine ID from random generator. Apr 30 00:36:40.289064 systemd[1]: Queued start job for default target initrd.target. Apr 30 00:36:40.289072 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 00:36:40.289080 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 00:36:40.289088 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Apr 30 00:36:40.289098 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 00:36:40.289106 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 30 00:36:40.289114 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 30 00:36:40.289123 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 30 00:36:40.289131 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 30 00:36:40.289140 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 00:36:40.289148 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 00:36:40.289157 systemd[1]: Reached target paths.target - Path Units. Apr 30 00:36:40.289165 systemd[1]: Reached target slices.target - Slice Units. Apr 30 00:36:40.289178 systemd[1]: Reached target swap.target - Swaps. Apr 30 00:36:40.289186 systemd[1]: Reached target timers.target - Timer Units. Apr 30 00:36:40.289194 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 00:36:40.289202 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 00:36:40.289210 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 00:36:40.289218 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 30 00:36:40.289228 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 00:36:40.289236 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 00:36:40.289244 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 00:36:40.289252 systemd[1]: Reached target sockets.target - Socket Units. 
Apr 30 00:36:40.289260 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 30 00:36:40.289268 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 00:36:40.289276 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 30 00:36:40.289284 systemd[1]: Starting systemd-fsck-usr.service... Apr 30 00:36:40.289314 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 00:36:40.289326 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 00:36:40.289350 systemd-journald[217]: Collecting audit messages is disabled. Apr 30 00:36:40.289369 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:36:40.289377 systemd-journald[217]: Journal started Apr 30 00:36:40.289397 systemd-journald[217]: Runtime Journal (/run/log/journal/c03311371a094c5189fbc5a3851284d6) is 8.0M, max 78.5M, 70.5M free. Apr 30 00:36:40.289980 systemd-modules-load[218]: Inserted module 'overlay' Apr 30 00:36:40.305676 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 00:36:40.308951 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 30 00:36:40.319048 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 00:36:40.353707 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 30 00:36:40.353730 kernel: Bridge firewalling registered Apr 30 00:36:40.342786 systemd[1]: Finished systemd-fsck-usr.service. Apr 30 00:36:40.357952 systemd-modules-load[218]: Inserted module 'br_netfilter' Apr 30 00:36:40.358882 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 00:36:40.368668 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 30 00:36:40.394527 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 00:36:40.414488 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:36:40.430350 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 00:36:40.438424 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 00:36:40.444899 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 00:36:40.477533 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:36:40.485665 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 00:36:40.498752 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 00:36:40.526427 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 30 00:36:40.533457 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 00:36:40.552768 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 00:36:40.567606 dracut-cmdline[251]: dracut-dracut-053 Apr 30 00:36:40.567606 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a Apr 30 00:36:40.614200 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Apr 30 00:36:40.623570 systemd-resolved[252]: Positive Trust Anchors: Apr 30 00:36:40.623579 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 00:36:40.623610 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 00:36:40.625871 systemd-resolved[252]: Defaulting to hostname 'linux'. Apr 30 00:36:40.634148 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 00:36:40.710572 kernel: SCSI subsystem initialized Apr 30 00:36:40.710594 kernel: Loading iSCSI transport class v2.0-870. Apr 30 00:36:40.645044 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 00:36:40.731312 kernel: iscsi: registered transport (tcp) Apr 30 00:36:40.747100 kernel: iscsi: registered transport (qla4xxx) Apr 30 00:36:40.747141 kernel: QLogic iSCSI HBA Driver Apr 30 00:36:40.784716 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 30 00:36:40.799488 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 30 00:36:40.828330 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Apr 30 00:36:40.828392 kernel: device-mapper: uevent: version 1.0.3 Apr 30 00:36:40.838531 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 30 00:36:40.887323 kernel: raid6: neonx8 gen() 15761 MB/s Apr 30 00:36:40.907313 kernel: raid6: neonx4 gen() 15659 MB/s Apr 30 00:36:40.927307 kernel: raid6: neonx2 gen() 13234 MB/s Apr 30 00:36:40.948306 kernel: raid6: neonx1 gen() 10498 MB/s Apr 30 00:36:40.968307 kernel: raid6: int64x8 gen() 6963 MB/s Apr 30 00:36:40.988305 kernel: raid6: int64x4 gen() 7334 MB/s Apr 30 00:36:41.009305 kernel: raid6: int64x2 gen() 6127 MB/s Apr 30 00:36:41.032657 kernel: raid6: int64x1 gen() 5059 MB/s Apr 30 00:36:41.032668 kernel: raid6: using algorithm neonx8 gen() 15761 MB/s Apr 30 00:36:41.056777 kernel: raid6: .... xor() 11938 MB/s, rmw enabled Apr 30 00:36:41.056794 kernel: raid6: using neon recovery algorithm Apr 30 00:36:41.069108 kernel: xor: measuring software checksum speed Apr 30 00:36:41.069132 kernel: 8regs : 19793 MB/sec Apr 30 00:36:41.072615 kernel: 32regs : 19613 MB/sec Apr 30 00:36:41.076260 kernel: arm64_neon : 26954 MB/sec Apr 30 00:36:41.080367 kernel: xor: using function: arm64_neon (26954 MB/sec) Apr 30 00:36:41.131317 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 30 00:36:41.141967 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 30 00:36:41.157453 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 00:36:41.179480 systemd-udevd[437]: Using default interface naming scheme 'v255'. Apr 30 00:36:41.185578 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 00:36:41.206423 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 30 00:36:41.230572 dracut-pre-trigger[448]: rd.md=0: removing MD RAID activation Apr 30 00:36:41.255958 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Apr 30 00:36:41.274496 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 00:36:41.310075 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:36:41.329746 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 00:36:41.354344 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 00:36:41.371266 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 00:36:41.384418 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:36:41.408209 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 00:36:41.429323 kernel: hv_vmbus: Vmbus version:5.3
Apr 30 00:36:41.430474 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 00:36:41.458009 kernel: hv_vmbus: registering driver hyperv_keyboard
Apr 30 00:36:41.458067 kernel: pps_core: LinuxPPS API ver. 1 registered
Apr 30 00:36:41.451105 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 00:36:41.499323 kernel: hv_vmbus: registering driver hid_hyperv
Apr 30 00:36:41.499349 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Apr 30 00:36:41.499361 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Apr 30 00:36:41.499371 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Apr 30 00:36:41.504320 kernel: hv_vmbus: registering driver hv_netvsc
Apr 30 00:36:41.504355 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Apr 30 00:36:41.543222 kernel: PTP clock support registered
Apr 30 00:36:41.543239 kernel: hv_utils: Registering HyperV Utility Driver
Apr 30 00:36:41.543249 kernel: hv_vmbus: registering driver hv_utils
Apr 30 00:36:41.512496 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 00:36:41.465236 kernel: hv_utils: Heartbeat IC version 3.0
Apr 30 00:36:41.471173 kernel: hv_vmbus: registering driver hv_storvsc
Apr 30 00:36:41.471188 kernel: hv_utils: Shutdown IC version 3.2
Apr 30 00:36:41.471196 kernel: hv_utils: TimeSync IC version 4.0
Apr 30 00:36:41.471206 kernel: scsi host0: storvsc_host_t
Apr 30 00:36:41.471334 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Apr 30 00:36:41.471356 systemd-journald[217]: Time jumped backwards, rotating.
Apr 30 00:36:41.471409 kernel: scsi host1: storvsc_host_t
Apr 30 00:36:41.512657 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:36:41.486356 kernel: hv_netvsc 002248b4-e227-0022-48b4-e227002248b4 eth0: VF slot 1 added
Apr 30 00:36:41.527855 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:36:41.511190 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Apr 30 00:36:41.542593 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:36:41.542797 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:36:41.446013 systemd-resolved[252]: Clock change detected. Flushing caches.
Apr 30 00:36:41.446341 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:36:41.467132 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:36:41.579678 kernel: hv_vmbus: registering driver hv_pci
Apr 30 00:36:41.579704 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Apr 30 00:36:41.613755 kernel: hv_pci ae81ee55-282c-4f2c-94a1-aa25a61e5e7b: PCI VMBus probing: Using version 0x10004
Apr 30 00:36:41.710614 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 30 00:36:41.710640 kernel: hv_pci ae81ee55-282c-4f2c-94a1-aa25a61e5e7b: PCI host bridge to bus 282c:00
Apr 30 00:36:41.710760 kernel: pci_bus 282c:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Apr 30 00:36:41.710873 kernel: pci_bus 282c:00: No busn resource found for root bus, will use [bus 00-ff]
Apr 30 00:36:41.710951 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Apr 30 00:36:41.711048 kernel: pci 282c:00:02.0: [15b3:1018] type 00 class 0x020000
Apr 30 00:36:41.711190 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Apr 30 00:36:41.711287 kernel: pci 282c:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Apr 30 00:36:41.711413 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Apr 30 00:36:41.711508 kernel: pci 282c:00:02.0: enabling Extended Tags
Apr 30 00:36:41.711595 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 30 00:36:41.711692 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Apr 30 00:36:41.711797 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Apr 30 00:36:41.711882 kernel: pci 282c:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 282c:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Apr 30 00:36:41.711989 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 00:36:41.711999 kernel: pci_bus 282c:00: busn_res: [bus 00-ff] end is updated to 00
Apr 30 00:36:41.712081 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 30 00:36:41.712164 kernel: pci 282c:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Apr 30 00:36:41.511453 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:36:41.531583 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:36:41.591847 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:36:41.769478 kernel: mlx5_core 282c:00:02.0: enabling device (0000 -> 0002)
Apr 30 00:36:41.986670 kernel: mlx5_core 282c:00:02.0: firmware version: 16.30.1284
Apr 30 00:36:41.986808 kernel: hv_netvsc 002248b4-e227-0022-48b4-e227002248b4 eth0: VF registering: eth1
Apr 30 00:36:41.986915 kernel: mlx5_core 282c:00:02.0 eth1: joined to eth0
Apr 30 00:36:41.987013 kernel: mlx5_core 282c:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Apr 30 00:36:41.996392 kernel: mlx5_core 282c:00:02.0 enP10284s1: renamed from eth1
Apr 30 00:36:42.213930 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Apr 30 00:36:42.300399 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (501)
Apr 30 00:36:42.317509 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Apr 30 00:36:42.330196 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Apr 30 00:36:42.386406 kernel: BTRFS: device fsid 7216ceb7-401c-42de-84de-44adb68241e4 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (497)
Apr 30 00:36:42.399688 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Apr 30 00:36:42.406955 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Apr 30 00:36:42.439638 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 30 00:36:42.465395 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 00:36:42.473392 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 00:36:43.482013 disk-uuid[600]: The operation has completed successfully.
Apr 30 00:36:43.487190 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 00:36:43.534274 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 30 00:36:43.534398 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 30 00:36:43.566519 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 30 00:36:43.581632 sh[686]: Success
Apr 30 00:36:43.617401 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Apr 30 00:36:43.800457 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 30 00:36:43.821799 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 30 00:36:43.827727 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 30 00:36:43.866486 kernel: BTRFS info (device dm-0): first mount of filesystem 7216ceb7-401c-42de-84de-44adb68241e4
Apr 30 00:36:43.866551 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:36:43.873723 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 30 00:36:43.878641 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 30 00:36:43.882843 kernel: BTRFS info (device dm-0): using free space tree
Apr 30 00:36:44.192919 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 30 00:36:44.198488 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 30 00:36:44.218642 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 30 00:36:44.226537 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 30 00:36:44.264421 kernel: BTRFS info (device sda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:36:44.264466 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:36:44.269474 kernel: BTRFS info (device sda6): using free space tree
Apr 30 00:36:44.292326 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 00:36:44.300797 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 30 00:36:44.312654 kernel: BTRFS info (device sda6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:36:44.322125 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 30 00:36:44.340850 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 30 00:36:44.375391 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 00:36:44.400512 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 00:36:44.424679 systemd-networkd[870]: lo: Link UP
Apr 30 00:36:44.424686 systemd-networkd[870]: lo: Gained carrier
Apr 30 00:36:44.427283 systemd-networkd[870]: Enumeration completed
Apr 30 00:36:44.428285 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:36:44.428288 systemd-networkd[870]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 00:36:44.429065 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 00:36:44.435307 systemd[1]: Reached target network.target - Network.
Apr 30 00:36:44.520391 kernel: mlx5_core 282c:00:02.0 enP10284s1: Link up
Apr 30 00:36:44.557852 kernel: hv_netvsc 002248b4-e227-0022-48b4-e227002248b4 eth0: Data path switched to VF: enP10284s1
Apr 30 00:36:44.557522 systemd-networkd[870]: enP10284s1: Link UP
Apr 30 00:36:44.557599 systemd-networkd[870]: eth0: Link UP
Apr 30 00:36:44.557744 systemd-networkd[870]: eth0: Gained carrier
Apr 30 00:36:44.557752 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:36:44.568682 systemd-networkd[870]: enP10284s1: Gained carrier
Apr 30 00:36:44.587425 systemd-networkd[870]: eth0: DHCPv4 address 10.200.20.17/24, gateway 10.200.20.1 acquired from 168.63.129.16
Apr 30 00:36:45.351594 ignition[833]: Ignition 2.19.0
Apr 30 00:36:45.353398 ignition[833]: Stage: fetch-offline
Apr 30 00:36:45.355870 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 00:36:45.353440 ignition[833]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:36:45.353449 ignition[833]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 00:36:45.353596 ignition[833]: parsed url from cmdline: ""
Apr 30 00:36:45.353602 ignition[833]: no config URL provided
Apr 30 00:36:45.353610 ignition[833]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 00:36:45.381637 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 30 00:36:45.353622 ignition[833]: no config at "/usr/lib/ignition/user.ign"
Apr 30 00:36:45.353629 ignition[833]: failed to fetch config: resource requires networking
Apr 30 00:36:45.353809 ignition[833]: Ignition finished successfully
Apr 30 00:36:45.397693 ignition[880]: Ignition 2.19.0
Apr 30 00:36:45.397703 ignition[880]: Stage: fetch
Apr 30 00:36:45.397927 ignition[880]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:36:45.397936 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 00:36:45.398051 ignition[880]: parsed url from cmdline: ""
Apr 30 00:36:45.398067 ignition[880]: no config URL provided
Apr 30 00:36:45.398072 ignition[880]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 00:36:45.398079 ignition[880]: no config at "/usr/lib/ignition/user.ign"
Apr 30 00:36:45.398100 ignition[880]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Apr 30 00:36:45.487311 ignition[880]: GET result: OK
Apr 30 00:36:45.487387 ignition[880]: config has been read from IMDS userdata
Apr 30 00:36:45.487430 ignition[880]: parsing config with SHA512: d7ff489ef25ee734b9bc57b73f02a92d3dc592b731c10fa13aaec699c5590d6a7e08e9719c21091921be94f07c7fca2000bf3d48d37c9d41cf2506ea1467d6ef
Apr 30 00:36:45.491042 unknown[880]: fetched base config from "system"
Apr 30 00:36:45.491442 ignition[880]: fetch: fetch complete
Apr 30 00:36:45.491049 unknown[880]: fetched base config from "system"
Apr 30 00:36:45.491446 ignition[880]: fetch: fetch passed
Apr 30 00:36:45.491054 unknown[880]: fetched user config from "azure"
Apr 30 00:36:45.491484 ignition[880]: Ignition finished successfully
Apr 30 00:36:45.496612 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 30 00:36:45.519585 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 30 00:36:45.543051 ignition[887]: Ignition 2.19.0
Apr 30 00:36:45.543081 ignition[887]: Stage: kargs
Apr 30 00:36:45.547868 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 00:36:45.543253 ignition[887]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:36:45.543266 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 00:36:45.544124 ignition[887]: kargs: kargs passed
Apr 30 00:36:45.544166 ignition[887]: Ignition finished successfully
Apr 30 00:36:45.570603 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 30 00:36:45.588032 ignition[893]: Ignition 2.19.0
Apr 30 00:36:45.588039 ignition[893]: Stage: disks
Apr 30 00:36:45.593285 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 30 00:36:45.588242 ignition[893]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:36:45.600103 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 00:36:45.588251 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 00:36:45.608599 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 00:36:45.589937 ignition[893]: disks: disks passed
Apr 30 00:36:45.619931 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 00:36:45.589994 ignition[893]: Ignition finished successfully
Apr 30 00:36:45.629567 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 00:36:45.641112 systemd[1]: Reached target basic.target - Basic System.
Apr 30 00:36:45.671631 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 00:36:45.745755 systemd-fsck[901]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Apr 30 00:36:45.754838 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 00:36:45.770555 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 30 00:36:45.789565 systemd-networkd[870]: enP10284s1: Gained IPv6LL
Apr 30 00:36:45.825382 kernel: EXT4-fs (sda9): mounted filesystem c13301f3-70ec-4948-963a-f1db0e953273 r/w with ordered data mode. Quota mode: none.
Apr 30 00:36:45.826291 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 00:36:45.830858 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 00:36:45.880432 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 00:36:45.890479 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 00:36:45.908933 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 30 00:36:45.945316 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (912)
Apr 30 00:36:45.945341 kernel: BTRFS info (device sda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:36:45.945352 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:36:45.945395 kernel: BTRFS info (device sda6): using free space tree
Apr 30 00:36:45.915340 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 00:36:45.968163 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 00:36:45.915383 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 00:36:45.929376 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 00:36:45.970532 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 00:36:45.994579 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 00:36:46.108496 systemd-networkd[870]: eth0: Gained IPv6LL
Apr 30 00:36:46.452087 coreos-metadata[914]: Apr 30 00:36:46.452 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Apr 30 00:36:46.462944 coreos-metadata[914]: Apr 30 00:36:46.462 INFO Fetch successful
Apr 30 00:36:46.468998 coreos-metadata[914]: Apr 30 00:36:46.468 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Apr 30 00:36:46.480943 coreos-metadata[914]: Apr 30 00:36:46.479 INFO Fetch successful
Apr 30 00:36:46.480943 coreos-metadata[914]: Apr 30 00:36:46.479 INFO wrote hostname ci-4081.3.3-a-2b660cb835 to /sysroot/etc/hostname
Apr 30 00:36:46.480974 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 00:36:46.917868 initrd-setup-root[941]: cut: /sysroot/etc/passwd: No such file or directory
Apr 30 00:36:46.984992 initrd-setup-root[948]: cut: /sysroot/etc/group: No such file or directory
Apr 30 00:36:47.010882 initrd-setup-root[955]: cut: /sysroot/etc/shadow: No such file or directory
Apr 30 00:36:47.043285 initrd-setup-root[962]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 30 00:36:47.753723 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 30 00:36:47.770550 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 30 00:36:47.777970 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 30 00:36:47.803458 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 30 00:36:47.809423 kernel: BTRFS info (device sda6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:36:47.825357 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 30 00:36:47.839463 ignition[1033]: INFO : Ignition 2.19.0
Apr 30 00:36:47.839463 ignition[1033]: INFO : Stage: mount
Apr 30 00:36:47.847192 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:36:47.847192 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 00:36:47.847192 ignition[1033]: INFO : mount: mount passed
Apr 30 00:36:47.847192 ignition[1033]: INFO : Ignition finished successfully
Apr 30 00:36:47.847283 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 30 00:36:47.877443 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 30 00:36:47.891584 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 00:36:47.918415 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1041)
Apr 30 00:36:47.918451 kernel: BTRFS info (device sda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:36:47.924716 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:36:47.929204 kernel: BTRFS info (device sda6): using free space tree
Apr 30 00:36:47.935378 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 00:36:47.937574 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 00:36:47.963520 ignition[1059]: INFO : Ignition 2.19.0
Apr 30 00:36:47.963520 ignition[1059]: INFO : Stage: files
Apr 30 00:36:47.971783 ignition[1059]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:36:47.971783 ignition[1059]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 00:36:47.971783 ignition[1059]: DEBUG : files: compiled without relabeling support, skipping
Apr 30 00:36:48.002249 ignition[1059]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 30 00:36:48.002249 ignition[1059]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 30 00:36:48.117198 ignition[1059]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 30 00:36:48.124632 ignition[1059]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 30 00:36:48.124632 ignition[1059]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 30 00:36:48.117568 unknown[1059]: wrote ssh authorized keys file for user: core
Apr 30 00:36:48.146984 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 30 00:36:48.156919 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 30 00:36:48.156919 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Apr 30 00:36:48.156919 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Apr 30 00:36:48.257039 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 30 00:36:48.495451 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Apr 30 00:36:48.495451 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 00:36:48.515183 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 00:36:48.515183 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 00:36:48.515183 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 00:36:48.515183 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 00:36:48.515183 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 00:36:48.515183 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 00:36:48.515183 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 00:36:48.515183 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 00:36:48.515183 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 00:36:48.515183 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Apr 30 00:36:48.515183 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Apr 30 00:36:48.515183 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Apr 30 00:36:48.515183 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Apr 30 00:36:48.921656 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 30 00:36:49.133679 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Apr 30 00:36:49.133679 ignition[1059]: INFO : files: op(c): [started] processing unit "containerd.service"
Apr 30 00:36:49.203454 ignition[1059]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 30 00:36:49.216689 ignition[1059]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 30 00:36:49.216689 ignition[1059]: INFO : files: op(c): [finished] processing unit "containerd.service"
Apr 30 00:36:49.216689 ignition[1059]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Apr 30 00:36:49.216689 ignition[1059]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 00:36:49.216689 ignition[1059]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 00:36:49.216689 ignition[1059]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Apr 30 00:36:49.216689 ignition[1059]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Apr 30 00:36:49.216689 ignition[1059]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Apr 30 00:36:49.216689 ignition[1059]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 00:36:49.216689 ignition[1059]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 00:36:49.216689 ignition[1059]: INFO : files: files passed
Apr 30 00:36:49.216689 ignition[1059]: INFO : Ignition finished successfully
Apr 30 00:36:49.217395 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 00:36:49.260675 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 00:36:49.279545 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 00:36:49.379013 initrd-setup-root-after-ignition[1089]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:36:49.301795 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 00:36:49.394433 initrd-setup-root-after-ignition[1085]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:36:49.394433 initrd-setup-root-after-ignition[1085]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:36:49.301883 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 00:36:49.346311 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 00:36:49.358310 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 00:36:49.387591 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 00:36:49.433558 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 00:36:49.433655 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 00:36:49.444273 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 00:36:49.456260 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 00:36:49.467016 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 00:36:49.492588 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 00:36:49.503766 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 00:36:49.527576 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 00:36:49.544056 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:36:49.555934 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:36:49.568965 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 00:36:49.578284 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 00:36:49.578477 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 00:36:49.596064 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 00:36:49.601680 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 00:36:49.611308 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 00:36:49.621604 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 00:36:49.633359 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 00:36:49.644976 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 00:36:49.656071 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 00:36:49.667675 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 00:36:49.679442 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 00:36:49.689677 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 00:36:49.699332 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 00:36:49.699521 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 00:36:49.713818 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:36:49.724743 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:36:49.736398 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 00:36:49.742544 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:36:49.749443 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 00:36:49.749615 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 00:36:49.767169 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 00:36:49.767332 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 00:36:49.779053 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 00:36:49.779193 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 00:36:49.789472 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 30 00:36:49.789614 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 00:36:49.820474 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 00:36:49.836772 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 00:36:49.837012 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:36:49.862935 ignition[1110]: INFO : Ignition 2.19.0
Apr 30 00:36:49.862935 ignition[1110]: INFO : Stage: umount
Apr 30 00:36:49.862935 ignition[1110]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:36:49.862935 ignition[1110]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 00:36:49.862935 ignition[1110]: INFO : umount: umount passed
Apr 30 00:36:49.862935 ignition[1110]: INFO : Ignition finished successfully
Apr 30 00:36:49.870212 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 00:36:49.883807 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 00:36:49.884001 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:36:49.894988 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 00:36:49.895135 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 00:36:49.912778 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 00:36:49.912878 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 00:36:49.921726 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 00:36:49.921808 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 00:36:49.934776 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 00:36:49.934837 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 00:36:49.943668 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 00:36:49.943710 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 00:36:49.954600 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 30 00:36:49.954641 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 30 00:36:49.964815 systemd[1]: Stopped target network.target - Network.
Apr 30 00:36:49.974616 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 00:36:49.974663 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 00:36:49.985517 systemd[1]: Stopped target paths.target - Path Units. Apr 30 00:36:49.996028 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 30 00:36:50.000707 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 00:36:50.007157 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 00:36:50.018384 systemd[1]: Stopped target sockets.target - Socket Units. Apr 30 00:36:50.028838 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 00:36:50.028890 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 00:36:50.038742 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 00:36:50.038780 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 00:36:50.049712 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 00:36:50.049763 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 00:36:50.060909 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 00:36:50.060957 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 30 00:36:50.071450 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 00:36:50.081611 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 00:36:50.092113 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 00:36:50.096461 systemd-networkd[870]: eth0: DHCPv6 lease lost Apr 30 00:36:50.098070 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 00:36:50.099607 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 00:36:50.111102 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 00:36:50.111188 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Apr 30 00:36:50.305212 kernel: hv_netvsc 002248b4-e227-0022-48b4-e227002248b4 eth0: Data path switched from VF: enP10284s1 Apr 30 00:36:50.123442 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 00:36:50.123493 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 00:36:50.147527 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 00:36:50.157125 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 00:36:50.157192 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 00:36:50.168040 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 00:36:50.168086 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:36:50.178277 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 00:36:50.178321 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 00:36:50.188332 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 00:36:50.188411 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 00:36:50.200014 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 00:36:50.238586 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 00:36:50.238811 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 00:36:50.249041 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 30 00:36:50.249085 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 00:36:50.260705 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 30 00:36:50.260748 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 00:36:50.270546 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Apr 30 00:36:50.270593 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 30 00:36:50.286967 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 00:36:50.287015 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 00:36:50.315447 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 00:36:50.315503 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 00:36:50.354595 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 00:36:50.365983 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 00:36:50.366063 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 00:36:50.378051 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 00:36:50.378095 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:36:50.389584 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 00:36:50.389662 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 00:36:50.402120 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 00:36:50.402227 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 00:36:50.910725 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 00:36:50.910842 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 00:36:50.920850 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 00:36:50.930647 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 00:36:50.930725 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 00:36:50.956573 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 00:36:53.147565 systemd[1]: Switching root. 
Apr 30 00:36:53.179994 systemd-journald[217]: Journal stopped Apr 30 00:36:40.281908 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Apr 30 00:36:40.281928 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Apr 29 23:08:45 -00 2025 Apr 30 00:36:40.281937 kernel: KASLR enabled Apr 30 00:36:40.281943 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Apr 30 00:36:40.281950 kernel: printk: bootconsole [pl11] enabled Apr 30 00:36:40.281955 kernel: efi: EFI v2.7 by EDK II Apr 30 00:36:40.281963 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Apr 30 00:36:40.281969 kernel: random: crng init done Apr 30 00:36:40.281975 kernel: ACPI: Early table checksum verification disabled Apr 30 00:36:40.281981 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Apr 30 00:36:40.281988 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 00:36:40.281994 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 00:36:40.282001 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Apr 30 00:36:40.282007 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 00:36:40.282015 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 00:36:40.282021 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 00:36:40.282028 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 00:36:40.282035 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 00:36:40.282042 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 
VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 00:36:40.282048 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Apr 30 00:36:40.282055 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 00:36:40.282061 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Apr 30 00:36:40.282067 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Apr 30 00:36:40.282074 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Apr 30 00:36:40.282080 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Apr 30 00:36:40.282086 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Apr 30 00:36:40.282093 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Apr 30 00:36:40.282099 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Apr 30 00:36:40.282107 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Apr 30 00:36:40.282113 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Apr 30 00:36:40.282120 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Apr 30 00:36:40.282126 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Apr 30 00:36:40.282132 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Apr 30 00:36:40.282139 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Apr 30 00:36:40.282145 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Apr 30 00:36:40.282151 kernel: Zone ranges: Apr 30 00:36:40.282157 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Apr 30 00:36:40.282164 kernel: DMA32 empty Apr 30 00:36:40.282170 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Apr 30 00:36:40.282176 kernel: Movable zone start for each node Apr 30 00:36:40.282186 kernel: Early memory node ranges Apr 30 00:36:40.282193 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Apr 30 00:36:40.282200 kernel: node 0: [mem 
0x0000000000824000-0x000000003e54ffff] Apr 30 00:36:40.282207 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Apr 30 00:36:40.282213 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Apr 30 00:36:40.282221 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Apr 30 00:36:40.282228 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Apr 30 00:36:40.282235 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Apr 30 00:36:40.282242 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Apr 30 00:36:40.282249 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Apr 30 00:36:40.282255 kernel: psci: probing for conduit method from ACPI. Apr 30 00:36:40.282262 kernel: psci: PSCIv1.1 detected in firmware. Apr 30 00:36:40.282269 kernel: psci: Using standard PSCI v0.2 function IDs Apr 30 00:36:40.282276 kernel: psci: MIGRATE_INFO_TYPE not supported. Apr 30 00:36:40.282282 kernel: psci: SMC Calling Convention v1.4 Apr 30 00:36:40.282289 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Apr 30 00:36:40.282311 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Apr 30 00:36:40.282320 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Apr 30 00:36:40.282327 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Apr 30 00:36:40.282334 kernel: pcpu-alloc: [0] 0 [0] 1 Apr 30 00:36:40.282341 kernel: Detected PIPT I-cache on CPU0 Apr 30 00:36:40.282348 kernel: CPU features: detected: GIC system register CPU interface Apr 30 00:36:40.282355 kernel: CPU features: detected: Hardware dirty bit management Apr 30 00:36:40.282361 kernel: CPU features: detected: Spectre-BHB Apr 30 00:36:40.282368 kernel: CPU features: kernel page table isolation forced ON by KASLR Apr 30 00:36:40.282375 kernel: CPU features: detected: Kernel page table isolation (KPTI) Apr 30 00:36:40.282381 kernel: CPU features: detected: ARM erratum 1418040 Apr 30 00:36:40.282388 kernel: CPU features: detected: ARM erratum 
1542419 (kernel portion) Apr 30 00:36:40.282396 kernel: CPU features: detected: SSBS not fully self-synchronizing Apr 30 00:36:40.282403 kernel: alternatives: applying boot alternatives Apr 30 00:36:40.282411 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a Apr 30 00:36:40.282418 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Apr 30 00:36:40.282425 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 30 00:36:40.282432 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 30 00:36:40.282439 kernel: Fallback order for Node 0: 0 Apr 30 00:36:40.282446 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Apr 30 00:36:40.282452 kernel: Policy zone: Normal Apr 30 00:36:40.282459 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 30 00:36:40.282466 kernel: software IO TLB: area num 2. Apr 30 00:36:40.282474 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Apr 30 00:36:40.282481 kernel: Memory: 3982692K/4194160K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 211468K reserved, 0K cma-reserved) Apr 30 00:36:40.282488 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 30 00:36:40.282494 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 30 00:36:40.282502 kernel: rcu: RCU event tracing is enabled. Apr 30 00:36:40.282509 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. 
Apr 30 00:36:40.282516 kernel: Trampoline variant of Tasks RCU enabled. Apr 30 00:36:40.282523 kernel: Tracing variant of Tasks RCU enabled. Apr 30 00:36:40.282529 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 30 00:36:40.282536 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 30 00:36:40.282543 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Apr 30 00:36:40.282551 kernel: GICv3: 960 SPIs implemented Apr 30 00:36:40.282558 kernel: GICv3: 0 Extended SPIs implemented Apr 30 00:36:40.282564 kernel: Root IRQ handler: gic_handle_irq Apr 30 00:36:40.282571 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Apr 30 00:36:40.282578 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Apr 30 00:36:40.282584 kernel: ITS: No ITS available, not enabling LPIs Apr 30 00:36:40.282591 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 30 00:36:40.282598 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Apr 30 00:36:40.282605 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Apr 30 00:36:40.282612 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Apr 30 00:36:40.282619 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Apr 30 00:36:40.282627 kernel: Console: colour dummy device 80x25 Apr 30 00:36:40.282634 kernel: printk: console [tty1] enabled Apr 30 00:36:40.282641 kernel: ACPI: Core revision 20230628 Apr 30 00:36:40.282648 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Apr 30 00:36:40.282655 kernel: pid_max: default: 32768 minimum: 301 Apr 30 00:36:40.282662 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 30 00:36:40.282669 kernel: landlock: Up and running. Apr 30 00:36:40.282676 kernel: SELinux: Initializing. 
Apr 30 00:36:40.282683 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 30 00:36:40.282690 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 30 00:36:40.282698 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 00:36:40.282705 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 00:36:40.282712 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Apr 30 00:36:40.282719 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 Apr 30 00:36:40.282726 kernel: Hyper-V: enabling crash_kexec_post_notifiers Apr 30 00:36:40.282733 kernel: rcu: Hierarchical SRCU implementation. Apr 30 00:36:40.282740 kernel: rcu: Max phase no-delay instances is 400. Apr 30 00:36:40.282753 kernel: Remapping and enabling EFI services. Apr 30 00:36:40.282760 kernel: smp: Bringing up secondary CPUs ... Apr 30 00:36:40.282767 kernel: Detected PIPT I-cache on CPU1 Apr 30 00:36:40.282774 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Apr 30 00:36:40.282783 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Apr 30 00:36:40.282790 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Apr 30 00:36:40.282797 kernel: smp: Brought up 1 node, 2 CPUs Apr 30 00:36:40.282804 kernel: SMP: Total of 2 processors activated. 
Apr 30 00:36:40.282812 kernel: CPU features: detected: 32-bit EL0 Support Apr 30 00:36:40.282820 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Apr 30 00:36:40.282828 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Apr 30 00:36:40.282835 kernel: CPU features: detected: CRC32 instructions Apr 30 00:36:40.282842 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Apr 30 00:36:40.282849 kernel: CPU features: detected: LSE atomic instructions Apr 30 00:36:40.282857 kernel: CPU features: detected: Privileged Access Never Apr 30 00:36:40.282864 kernel: CPU: All CPU(s) started at EL1 Apr 30 00:36:40.282871 kernel: alternatives: applying system-wide alternatives Apr 30 00:36:40.282878 kernel: devtmpfs: initialized Apr 30 00:36:40.282887 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 30 00:36:40.282894 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 30 00:36:40.282902 kernel: pinctrl core: initialized pinctrl subsystem Apr 30 00:36:40.282909 kernel: SMBIOS 3.1.0 present. 
Apr 30 00:36:40.282916 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Apr 30 00:36:40.282923 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 30 00:36:40.282931 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Apr 30 00:36:40.282938 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Apr 30 00:36:40.282945 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Apr 30 00:36:40.282954 kernel: audit: initializing netlink subsys (disabled) Apr 30 00:36:40.282961 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Apr 30 00:36:40.282969 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 30 00:36:40.282976 kernel: cpuidle: using governor menu Apr 30 00:36:40.282983 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Apr 30 00:36:40.282990 kernel: ASID allocator initialised with 32768 entries Apr 30 00:36:40.282998 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 30 00:36:40.283005 kernel: Serial: AMBA PL011 UART driver Apr 30 00:36:40.283012 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Apr 30 00:36:40.283021 kernel: Modules: 0 pages in range for non-PLT usage Apr 30 00:36:40.283028 kernel: Modules: 509024 pages in range for PLT usage Apr 30 00:36:40.283035 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 30 00:36:40.283043 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Apr 30 00:36:40.283050 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Apr 30 00:36:40.283057 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Apr 30 00:36:40.283064 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 30 00:36:40.283072 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Apr 30 00:36:40.283079 kernel: HugeTLB: 
registered 64.0 KiB page size, pre-allocated 0 pages Apr 30 00:36:40.283087 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Apr 30 00:36:40.283095 kernel: ACPI: Added _OSI(Module Device) Apr 30 00:36:40.283102 kernel: ACPI: Added _OSI(Processor Device) Apr 30 00:36:40.283109 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Apr 30 00:36:40.283116 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 30 00:36:40.283124 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 30 00:36:40.283131 kernel: ACPI: Interpreter enabled Apr 30 00:36:40.283138 kernel: ACPI: Using GIC for interrupt routing Apr 30 00:36:40.283145 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Apr 30 00:36:40.283154 kernel: printk: console [ttyAMA0] enabled Apr 30 00:36:40.283161 kernel: printk: bootconsole [pl11] disabled Apr 30 00:36:40.283168 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Apr 30 00:36:40.283175 kernel: iommu: Default domain type: Translated Apr 30 00:36:40.283183 kernel: iommu: DMA domain TLB invalidation policy: strict mode Apr 30 00:36:40.283190 kernel: efivars: Registered efivars operations Apr 30 00:36:40.283197 kernel: vgaarb: loaded Apr 30 00:36:40.283204 kernel: clocksource: Switched to clocksource arch_sys_counter Apr 30 00:36:40.283211 kernel: VFS: Disk quotas dquot_6.6.0 Apr 30 00:36:40.283220 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 30 00:36:40.283227 kernel: pnp: PnP ACPI init Apr 30 00:36:40.283234 kernel: pnp: PnP ACPI: found 0 devices Apr 30 00:36:40.283241 kernel: NET: Registered PF_INET protocol family Apr 30 00:36:40.283249 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 30 00:36:40.283256 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 30 00:36:40.283264 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 30 
00:36:40.283271 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 30 00:36:40.283278 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 30 00:36:40.283287 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 30 00:36:40.288349 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 30 00:36:40.288363 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 30 00:36:40.288371 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 30 00:36:40.288379 kernel: PCI: CLS 0 bytes, default 64 Apr 30 00:36:40.288386 kernel: kvm [1]: HYP mode not available Apr 30 00:36:40.288394 kernel: Initialise system trusted keyrings Apr 30 00:36:40.288402 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 30 00:36:40.288409 kernel: Key type asymmetric registered Apr 30 00:36:40.288421 kernel: Asymmetric key parser 'x509' registered Apr 30 00:36:40.288429 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Apr 30 00:36:40.288437 kernel: io scheduler mq-deadline registered Apr 30 00:36:40.288444 kernel: io scheduler kyber registered Apr 30 00:36:40.288451 kernel: io scheduler bfq registered Apr 30 00:36:40.288459 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 30 00:36:40.288466 kernel: thunder_xcv, ver 1.0 Apr 30 00:36:40.288473 kernel: thunder_bgx, ver 1.0 Apr 30 00:36:40.288480 kernel: nicpf, ver 1.0 Apr 30 00:36:40.288487 kernel: nicvf, ver 1.0 Apr 30 00:36:40.288622 kernel: rtc-efi rtc-efi.0: registered as rtc0 Apr 30 00:36:40.288695 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-04-30T00:36:39 UTC (1745973399) Apr 30 00:36:40.288705 kernel: efifb: probing for efifb Apr 30 00:36:40.288713 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Apr 30 00:36:40.288721 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Apr 30 00:36:40.288741 kernel: efifb: scrolling: 
redraw Apr 30 00:36:40.288749 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Apr 30 00:36:40.288758 kernel: Console: switching to colour frame buffer device 128x48 Apr 30 00:36:40.288766 kernel: fb0: EFI VGA frame buffer device Apr 30 00:36:40.288773 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Apr 30 00:36:40.288781 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 30 00:36:40.288788 kernel: No ACPI PMU IRQ for CPU0 Apr 30 00:36:40.288795 kernel: No ACPI PMU IRQ for CPU1 Apr 30 00:36:40.288802 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Apr 30 00:36:40.288810 kernel: watchdog: Delayed init of the lockup detector failed: -19 Apr 30 00:36:40.288817 kernel: watchdog: Hard watchdog permanently disabled Apr 30 00:36:40.288826 kernel: NET: Registered PF_INET6 protocol family Apr 30 00:36:40.288834 kernel: Segment Routing with IPv6 Apr 30 00:36:40.288841 kernel: In-situ OAM (IOAM) with IPv6 Apr 30 00:36:40.288848 kernel: NET: Registered PF_PACKET protocol family Apr 30 00:36:40.288855 kernel: Key type dns_resolver registered Apr 30 00:36:40.288869 kernel: registered taskstats version 1 Apr 30 00:36:40.288876 kernel: Loading compiled-in X.509 certificates Apr 30 00:36:40.288884 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: e2b28159d3a83b6f5d5db45519e470b1b834e378' Apr 30 00:36:40.288891 kernel: Key type .fscrypt registered Apr 30 00:36:40.288900 kernel: Key type fscrypt-provisioning registered Apr 30 00:36:40.288908 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 30 00:36:40.288915 kernel: ima: Allocated hash algorithm: sha1 Apr 30 00:36:40.288922 kernel: ima: No architecture policies found Apr 30 00:36:40.288930 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Apr 30 00:36:40.288937 kernel: clk: Disabling unused clocks Apr 30 00:36:40.288944 kernel: Freeing unused kernel memory: 39424K Apr 30 00:36:40.288952 kernel: Run /init as init process Apr 30 00:36:40.288959 kernel: with arguments: Apr 30 00:36:40.288967 kernel: /init Apr 30 00:36:40.288974 kernel: with environment: Apr 30 00:36:40.288981 kernel: HOME=/ Apr 30 00:36:40.288989 kernel: TERM=linux Apr 30 00:36:40.288996 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 30 00:36:40.289006 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 00:36:40.289015 systemd[1]: Detected virtualization microsoft. Apr 30 00:36:40.289023 systemd[1]: Detected architecture arm64. Apr 30 00:36:40.289032 systemd[1]: Running in initrd. Apr 30 00:36:40.289040 systemd[1]: No hostname configured, using default hostname. Apr 30 00:36:40.289048 systemd[1]: Hostname set to . Apr 30 00:36:40.289056 systemd[1]: Initializing machine ID from random generator. Apr 30 00:36:40.289064 systemd[1]: Queued start job for default target initrd.target. Apr 30 00:36:40.289072 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 00:36:40.289080 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 00:36:40.289088 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Apr 30 00:36:40.289098 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 00:36:40.289106 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 30 00:36:40.289114 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 30 00:36:40.289123 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 30 00:36:40.289131 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 30 00:36:40.289140 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 00:36:40.289148 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 00:36:40.289157 systemd[1]: Reached target paths.target - Path Units. Apr 30 00:36:40.289165 systemd[1]: Reached target slices.target - Slice Units. Apr 30 00:36:40.289178 systemd[1]: Reached target swap.target - Swaps. Apr 30 00:36:40.289186 systemd[1]: Reached target timers.target - Timer Units. Apr 30 00:36:40.289194 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 00:36:40.289202 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 00:36:40.289210 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 00:36:40.289218 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 30 00:36:40.289228 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 00:36:40.289236 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 00:36:40.289244 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 00:36:40.289252 systemd[1]: Reached target sockets.target - Socket Units. 
Apr 30 00:36:40.289260 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 30 00:36:40.289268 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 00:36:40.289276 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 30 00:36:40.289284 systemd[1]: Starting systemd-fsck-usr.service... Apr 30 00:36:40.289314 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 00:36:40.289326 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 00:36:40.289350 systemd-journald[217]: Collecting audit messages is disabled. Apr 30 00:36:40.289369 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:36:40.289377 systemd-journald[217]: Journal started Apr 30 00:36:40.289397 systemd-journald[217]: Runtime Journal (/run/log/journal/c03311371a094c5189fbc5a3851284d6) is 8.0M, max 78.5M, 70.5M free. Apr 30 00:36:40.289980 systemd-modules-load[218]: Inserted module 'overlay' Apr 30 00:36:40.305676 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 00:36:40.308951 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 30 00:36:40.319048 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 00:36:40.353707 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 30 00:36:40.353730 kernel: Bridge firewalling registered Apr 30 00:36:40.342786 systemd[1]: Finished systemd-fsck-usr.service. Apr 30 00:36:40.357952 systemd-modules-load[218]: Inserted module 'br_netfilter' Apr 30 00:36:40.358882 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 00:36:40.368668 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 30 00:36:40.394527 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 00:36:40.414488 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:36:40.430350 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 00:36:40.438424 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 00:36:40.444899 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 00:36:40.477533 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:36:40.485665 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 00:36:40.498752 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 00:36:40.526427 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 30 00:36:40.533457 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 00:36:40.552768 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 00:36:40.567606 dracut-cmdline[251]: dracut-dracut-053 Apr 30 00:36:40.567606 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a Apr 30 00:36:40.614200 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Apr 30 00:36:40.623570 systemd-resolved[252]: Positive Trust Anchors:
Apr 30 00:36:40.623579 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 00:36:40.623610 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 00:36:40.625871 systemd-resolved[252]: Defaulting to hostname 'linux'.
Apr 30 00:36:40.634148 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 00:36:40.710572 kernel: SCSI subsystem initialized
Apr 30 00:36:40.710594 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 00:36:40.645044 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:36:40.731312 kernel: iscsi: registered transport (tcp)
Apr 30 00:36:40.747100 kernel: iscsi: registered transport (qla4xxx)
Apr 30 00:36:40.747141 kernel: QLogic iSCSI HBA Driver
Apr 30 00:36:40.784716 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 00:36:40.799488 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 00:36:40.828330 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 00:36:40.828392 kernel: device-mapper: uevent: version 1.0.3
Apr 30 00:36:40.838531 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 00:36:40.887323 kernel: raid6: neonx8 gen() 15761 MB/s
Apr 30 00:36:40.907313 kernel: raid6: neonx4 gen() 15659 MB/s
Apr 30 00:36:40.927307 kernel: raid6: neonx2 gen() 13234 MB/s
Apr 30 00:36:40.948306 kernel: raid6: neonx1 gen() 10498 MB/s
Apr 30 00:36:40.968307 kernel: raid6: int64x8 gen() 6963 MB/s
Apr 30 00:36:40.988305 kernel: raid6: int64x4 gen() 7334 MB/s
Apr 30 00:36:41.009305 kernel: raid6: int64x2 gen() 6127 MB/s
Apr 30 00:36:41.032657 kernel: raid6: int64x1 gen() 5059 MB/s
Apr 30 00:36:41.032668 kernel: raid6: using algorithm neonx8 gen() 15761 MB/s
Apr 30 00:36:41.056777 kernel: raid6: .... xor() 11938 MB/s, rmw enabled
Apr 30 00:36:41.056794 kernel: raid6: using neon recovery algorithm
Apr 30 00:36:41.069108 kernel: xor: measuring software checksum speed
Apr 30 00:36:41.069132 kernel: 8regs : 19793 MB/sec
Apr 30 00:36:41.072615 kernel: 32regs : 19613 MB/sec
Apr 30 00:36:41.076260 kernel: arm64_neon : 26954 MB/sec
Apr 30 00:36:41.080367 kernel: xor: using function: arm64_neon (26954 MB/sec)
Apr 30 00:36:41.131317 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 00:36:41.141967 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 00:36:41.157453 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:36:41.179480 systemd-udevd[437]: Using default interface naming scheme 'v255'.
Apr 30 00:36:41.185578 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:36:41.206423 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 00:36:41.230572 dracut-pre-trigger[448]: rd.md=0: removing MD RAID activation
Apr 30 00:36:41.255958 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 00:36:41.274496 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 00:36:41.310075 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:36:41.329746 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 00:36:41.354344 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 00:36:41.371266 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 00:36:41.384418 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:36:41.408209 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 00:36:41.429323 kernel: hv_vmbus: Vmbus version:5.3
Apr 30 00:36:41.430474 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 00:36:41.458009 kernel: hv_vmbus: registering driver hyperv_keyboard
Apr 30 00:36:41.458067 kernel: pps_core: LinuxPPS API ver. 1 registered
Apr 30 00:36:41.451105 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 00:36:41.499323 kernel: hv_vmbus: registering driver hid_hyperv
Apr 30 00:36:41.499349 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Apr 30 00:36:41.499361 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Apr 30 00:36:41.499371 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Apr 30 00:36:41.504320 kernel: hv_vmbus: registering driver hv_netvsc
Apr 30 00:36:41.504355 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Apr 30 00:36:41.543222 kernel: PTP clock support registered
Apr 30 00:36:41.543239 kernel: hv_utils: Registering HyperV Utility Driver
Apr 30 00:36:41.543249 kernel: hv_vmbus: registering driver hv_utils
Apr 30 00:36:41.512496 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 00:36:41.465236 kernel: hv_utils: Heartbeat IC version 3.0
Apr 30 00:36:41.471173 kernel: hv_vmbus: registering driver hv_storvsc
Apr 30 00:36:41.471188 kernel: hv_utils: Shutdown IC version 3.2
Apr 30 00:36:41.471196 kernel: hv_utils: TimeSync IC version 4.0
Apr 30 00:36:41.471206 kernel: scsi host0: storvsc_host_t
Apr 30 00:36:41.471334 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Apr 30 00:36:41.471356 systemd-journald[217]: Time jumped backwards, rotating.
Apr 30 00:36:41.471409 kernel: scsi host1: storvsc_host_t
Apr 30 00:36:41.512657 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:36:41.486356 kernel: hv_netvsc 002248b4-e227-0022-48b4-e227002248b4 eth0: VF slot 1 added
Apr 30 00:36:41.527855 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:36:41.511190 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Apr 30 00:36:41.542593 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:36:41.542797 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:36:41.446013 systemd-resolved[252]: Clock change detected. Flushing caches.
Apr 30 00:36:41.446341 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:36:41.467132 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:36:41.579678 kernel: hv_vmbus: registering driver hv_pci
Apr 30 00:36:41.579704 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Apr 30 00:36:41.613755 kernel: hv_pci ae81ee55-282c-4f2c-94a1-aa25a61e5e7b: PCI VMBus probing: Using version 0x10004
Apr 30 00:36:41.710614 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 30 00:36:41.710640 kernel: hv_pci ae81ee55-282c-4f2c-94a1-aa25a61e5e7b: PCI host bridge to bus 282c:00
Apr 30 00:36:41.710760 kernel: pci_bus 282c:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Apr 30 00:36:41.710873 kernel: pci_bus 282c:00: No busn resource found for root bus, will use [bus 00-ff]
Apr 30 00:36:41.710951 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Apr 30 00:36:41.711048 kernel: pci 282c:00:02.0: [15b3:1018] type 00 class 0x020000
Apr 30 00:36:41.711190 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Apr 30 00:36:41.711287 kernel: pci 282c:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Apr 30 00:36:41.711413 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Apr 30 00:36:41.711508 kernel: pci 282c:00:02.0: enabling Extended Tags
Apr 30 00:36:41.711595 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 30 00:36:41.711692 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Apr 30 00:36:41.711797 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Apr 30 00:36:41.711882 kernel: pci 282c:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 282c:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Apr 30 00:36:41.711989 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 00:36:41.711999 kernel: pci_bus 282c:00: busn_res: [bus 00-ff] end is updated to 00
Apr 30 00:36:41.712081 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 30 00:36:41.712164 kernel: pci 282c:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Apr 30 00:36:41.511453 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:36:41.531583 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:36:41.591847 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:36:41.769478 kernel: mlx5_core 282c:00:02.0: enabling device (0000 -> 0002)
Apr 30 00:36:41.986670 kernel: mlx5_core 282c:00:02.0: firmware version: 16.30.1284
Apr 30 00:36:41.986808 kernel: hv_netvsc 002248b4-e227-0022-48b4-e227002248b4 eth0: VF registering: eth1
Apr 30 00:36:41.986915 kernel: mlx5_core 282c:00:02.0 eth1: joined to eth0
Apr 30 00:36:41.987013 kernel: mlx5_core 282c:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Apr 30 00:36:41.996392 kernel: mlx5_core 282c:00:02.0 enP10284s1: renamed from eth1
Apr 30 00:36:42.213930 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Apr 30 00:36:42.300399 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (501)
Apr 30 00:36:42.317509 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Apr 30 00:36:42.330196 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Apr 30 00:36:42.386406 kernel: BTRFS: device fsid 7216ceb7-401c-42de-84de-44adb68241e4 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (497)
Apr 30 00:36:42.399688 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Apr 30 00:36:42.406955 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Apr 30 00:36:42.439638 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 30 00:36:42.465395 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 00:36:42.473392 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 00:36:43.482013 disk-uuid[600]: The operation has completed successfully.
Apr 30 00:36:43.487190 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 00:36:43.534274 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 30 00:36:43.534398 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 30 00:36:43.566519 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 30 00:36:43.581632 sh[686]: Success
Apr 30 00:36:43.617401 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Apr 30 00:36:43.800457 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 30 00:36:43.821799 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 30 00:36:43.827727 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 30 00:36:43.866486 kernel: BTRFS info (device dm-0): first mount of filesystem 7216ceb7-401c-42de-84de-44adb68241e4
Apr 30 00:36:43.866551 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:36:43.873723 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 30 00:36:43.878641 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 30 00:36:43.882843 kernel: BTRFS info (device dm-0): using free space tree
Apr 30 00:36:44.192919 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 30 00:36:44.198488 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 30 00:36:44.218642 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 30 00:36:44.226537 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 30 00:36:44.264421 kernel: BTRFS info (device sda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:36:44.264466 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:36:44.269474 kernel: BTRFS info (device sda6): using free space tree
Apr 30 00:36:44.292326 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 00:36:44.300797 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 30 00:36:44.312654 kernel: BTRFS info (device sda6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:36:44.322125 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 30 00:36:44.340850 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 30 00:36:44.375391 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 00:36:44.400512 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 00:36:44.424679 systemd-networkd[870]: lo: Link UP
Apr 30 00:36:44.424686 systemd-networkd[870]: lo: Gained carrier
Apr 30 00:36:44.427283 systemd-networkd[870]: Enumeration completed
Apr 30 00:36:44.428285 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:36:44.428288 systemd-networkd[870]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 00:36:44.429065 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 00:36:44.435307 systemd[1]: Reached target network.target - Network.
Apr 30 00:36:44.520391 kernel: mlx5_core 282c:00:02.0 enP10284s1: Link up
Apr 30 00:36:44.557852 kernel: hv_netvsc 002248b4-e227-0022-48b4-e227002248b4 eth0: Data path switched to VF: enP10284s1
Apr 30 00:36:44.557522 systemd-networkd[870]: enP10284s1: Link UP
Apr 30 00:36:44.557599 systemd-networkd[870]: eth0: Link UP
Apr 30 00:36:44.557744 systemd-networkd[870]: eth0: Gained carrier
Apr 30 00:36:44.557752 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:36:44.568682 systemd-networkd[870]: enP10284s1: Gained carrier
Apr 30 00:36:44.587425 systemd-networkd[870]: eth0: DHCPv4 address 10.200.20.17/24, gateway 10.200.20.1 acquired from 168.63.129.16
Apr 30 00:36:45.351594 ignition[833]: Ignition 2.19.0
Apr 30 00:36:45.353398 ignition[833]: Stage: fetch-offline
Apr 30 00:36:45.355870 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 00:36:45.353440 ignition[833]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:36:45.353449 ignition[833]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 00:36:45.353596 ignition[833]: parsed url from cmdline: ""
Apr 30 00:36:45.353602 ignition[833]: no config URL provided
Apr 30 00:36:45.353610 ignition[833]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 00:36:45.381637 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 30 00:36:45.353622 ignition[833]: no config at "/usr/lib/ignition/user.ign"
Apr 30 00:36:45.353629 ignition[833]: failed to fetch config: resource requires networking
Apr 30 00:36:45.353809 ignition[833]: Ignition finished successfully
Apr 30 00:36:45.397693 ignition[880]: Ignition 2.19.0
Apr 30 00:36:45.397703 ignition[880]: Stage: fetch
Apr 30 00:36:45.397927 ignition[880]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:36:45.397936 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 00:36:45.398051 ignition[880]: parsed url from cmdline: ""
Apr 30 00:36:45.398067 ignition[880]: no config URL provided
Apr 30 00:36:45.398072 ignition[880]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 00:36:45.398079 ignition[880]: no config at "/usr/lib/ignition/user.ign"
Apr 30 00:36:45.398100 ignition[880]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Apr 30 00:36:45.487311 ignition[880]: GET result: OK
Apr 30 00:36:45.487387 ignition[880]: config has been read from IMDS userdata
Apr 30 00:36:45.487430 ignition[880]: parsing config with SHA512: d7ff489ef25ee734b9bc57b73f02a92d3dc592b731c10fa13aaec699c5590d6a7e08e9719c21091921be94f07c7fca2000bf3d48d37c9d41cf2506ea1467d6ef
Apr 30 00:36:45.491042 unknown[880]: fetched base config from "system"
Apr 30 00:36:45.491442 ignition[880]: fetch: fetch complete
Apr 30 00:36:45.491049 unknown[880]: fetched base config from "system"
Apr 30 00:36:45.491446 ignition[880]: fetch: fetch passed
Apr 30 00:36:45.491054 unknown[880]: fetched user config from "azure"
Apr 30 00:36:45.491484 ignition[880]: Ignition finished successfully
Apr 30 00:36:45.496612 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 30 00:36:45.519585 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 30 00:36:45.543051 ignition[887]: Ignition 2.19.0
Apr 30 00:36:45.543081 ignition[887]: Stage: kargs
Apr 30 00:36:45.547868 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 00:36:45.543253 ignition[887]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:36:45.543266 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 00:36:45.544124 ignition[887]: kargs: kargs passed
Apr 30 00:36:45.544166 ignition[887]: Ignition finished successfully
Apr 30 00:36:45.570603 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 30 00:36:45.588032 ignition[893]: Ignition 2.19.0
Apr 30 00:36:45.588039 ignition[893]: Stage: disks
Apr 30 00:36:45.593285 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 30 00:36:45.588242 ignition[893]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:36:45.600103 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 00:36:45.588251 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 00:36:45.608599 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 00:36:45.589937 ignition[893]: disks: disks passed
Apr 30 00:36:45.619931 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 00:36:45.589994 ignition[893]: Ignition finished successfully
Apr 30 00:36:45.629567 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 00:36:45.641112 systemd[1]: Reached target basic.target - Basic System.
Apr 30 00:36:45.671631 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 00:36:45.745755 systemd-fsck[901]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Apr 30 00:36:45.754838 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 00:36:45.770555 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 30 00:36:45.789565 systemd-networkd[870]: enP10284s1: Gained IPv6LL
Apr 30 00:36:45.825382 kernel: EXT4-fs (sda9): mounted filesystem c13301f3-70ec-4948-963a-f1db0e953273 r/w with ordered data mode. Quota mode: none.
Apr 30 00:36:45.826291 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 00:36:45.830858 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 00:36:45.880432 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 00:36:45.890479 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 00:36:45.908933 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 30 00:36:45.945316 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (912)
Apr 30 00:36:45.945341 kernel: BTRFS info (device sda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:36:45.945352 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:36:45.945395 kernel: BTRFS info (device sda6): using free space tree
Apr 30 00:36:45.915340 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 00:36:45.968163 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 00:36:45.915383 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 00:36:45.929376 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 00:36:45.970532 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 00:36:45.994579 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 00:36:46.108496 systemd-networkd[870]: eth0: Gained IPv6LL
Apr 30 00:36:46.452087 coreos-metadata[914]: Apr 30 00:36:46.452 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Apr 30 00:36:46.462944 coreos-metadata[914]: Apr 30 00:36:46.462 INFO Fetch successful
Apr 30 00:36:46.468998 coreos-metadata[914]: Apr 30 00:36:46.468 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Apr 30 00:36:46.480943 coreos-metadata[914]: Apr 30 00:36:46.479 INFO Fetch successful
Apr 30 00:36:46.480943 coreos-metadata[914]: Apr 30 00:36:46.479 INFO wrote hostname ci-4081.3.3-a-2b660cb835 to /sysroot/etc/hostname
Apr 30 00:36:46.480974 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 00:36:46.917868 initrd-setup-root[941]: cut: /sysroot/etc/passwd: No such file or directory
Apr 30 00:36:46.984992 initrd-setup-root[948]: cut: /sysroot/etc/group: No such file or directory
Apr 30 00:36:47.010882 initrd-setup-root[955]: cut: /sysroot/etc/shadow: No such file or directory
Apr 30 00:36:47.043285 initrd-setup-root[962]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 30 00:36:47.753723 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 30 00:36:47.770550 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 30 00:36:47.777970 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 30 00:36:47.803458 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 30 00:36:47.809423 kernel: BTRFS info (device sda6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:36:47.825357 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 30 00:36:47.839463 ignition[1033]: INFO : Ignition 2.19.0
Apr 30 00:36:47.839463 ignition[1033]: INFO : Stage: mount
Apr 30 00:36:47.847192 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:36:47.847192 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 00:36:47.847192 ignition[1033]: INFO : mount: mount passed
Apr 30 00:36:47.847192 ignition[1033]: INFO : Ignition finished successfully
Apr 30 00:36:47.847283 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 30 00:36:47.877443 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 30 00:36:47.891584 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 00:36:47.918415 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1041)
Apr 30 00:36:47.918451 kernel: BTRFS info (device sda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:36:47.924716 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:36:47.929204 kernel: BTRFS info (device sda6): using free space tree
Apr 30 00:36:47.935378 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 00:36:47.937574 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 00:36:47.963520 ignition[1059]: INFO : Ignition 2.19.0
Apr 30 00:36:47.963520 ignition[1059]: INFO : Stage: files
Apr 30 00:36:47.971783 ignition[1059]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:36:47.971783 ignition[1059]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 00:36:47.971783 ignition[1059]: DEBUG : files: compiled without relabeling support, skipping
Apr 30 00:36:48.002249 ignition[1059]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 30 00:36:48.002249 ignition[1059]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 30 00:36:48.117198 ignition[1059]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 30 00:36:48.124632 ignition[1059]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 30 00:36:48.124632 ignition[1059]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 30 00:36:48.117568 unknown[1059]: wrote ssh authorized keys file for user: core
Apr 30 00:36:48.146984 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 30 00:36:48.156919 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 30 00:36:48.156919 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Apr 30 00:36:48.156919 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Apr 30 00:36:48.257039 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 30 00:36:48.495451 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Apr 30 00:36:48.495451 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 00:36:48.515183 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 00:36:48.515183 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 00:36:48.515183 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 00:36:48.515183 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 00:36:48.515183 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 00:36:48.515183 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 00:36:48.515183 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 00:36:48.515183 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 00:36:48.515183 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 00:36:48.515183 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Apr 30 00:36:48.515183 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Apr 30 00:36:48.515183 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Apr 30 00:36:48.515183 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Apr 30 00:36:48.921656 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 30 00:36:49.133679 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Apr 30 00:36:49.133679 ignition[1059]: INFO : files: op(c): [started] processing unit "containerd.service"
Apr 30 00:36:49.203454 ignition[1059]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 30 00:36:49.216689 ignition[1059]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 30 00:36:49.216689 ignition[1059]: INFO : files: op(c): [finished] processing unit "containerd.service"
Apr 30 00:36:49.216689 ignition[1059]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Apr 30 00:36:49.216689 ignition[1059]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 00:36:49.216689 ignition[1059]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 00:36:49.216689 ignition[1059]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Apr 30 00:36:49.216689 ignition[1059]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Apr 30 00:36:49.216689 ignition[1059]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Apr 30 00:36:49.216689 ignition[1059]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 00:36:49.216689 ignition[1059]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 00:36:49.216689 ignition[1059]: INFO : files: files passed
Apr 30 00:36:49.216689 ignition[1059]: INFO : Ignition finished successfully
Apr 30 00:36:49.217395 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 00:36:49.260675 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 00:36:49.279545 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 00:36:49.379013 initrd-setup-root-after-ignition[1089]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:36:49.301795 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 00:36:49.394433 initrd-setup-root-after-ignition[1085]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:36:49.394433 initrd-setup-root-after-ignition[1085]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:36:49.301883 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 00:36:49.346311 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 00:36:49.358310 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 00:36:49.387591 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 00:36:49.433558 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 00:36:49.433655 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 00:36:49.444273 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 00:36:49.456260 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 00:36:49.467016 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 00:36:49.492588 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 00:36:49.503766 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 00:36:49.527576 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 00:36:49.544056 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:36:49.555934 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:36:49.568965 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 00:36:49.578284 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 00:36:49.578477 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 00:36:49.596064 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 00:36:49.601680 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 00:36:49.611308 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 00:36:49.621604 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 00:36:49.633359 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 00:36:49.644976 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 00:36:49.656071 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 00:36:49.667675 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 00:36:49.679442 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 00:36:49.689677 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 00:36:49.699332 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 00:36:49.699521 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 00:36:49.713818 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:36:49.724743 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:36:49.736398 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 00:36:49.742544 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:36:49.749443 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 00:36:49.749615 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 00:36:49.767169 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 00:36:49.767332 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 00:36:49.779053 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 00:36:49.779193 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 00:36:49.789472 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 30 00:36:49.789614 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 00:36:49.820474 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 00:36:49.836772 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 00:36:49.837012 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:36:49.862935 ignition[1110]: INFO : Ignition 2.19.0
Apr 30 00:36:49.862935 ignition[1110]: INFO : Stage: umount
Apr 30 00:36:49.862935 ignition[1110]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:36:49.862935 ignition[1110]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 00:36:49.862935 ignition[1110]: INFO : umount: umount passed
Apr 30 00:36:49.862935 ignition[1110]: INFO : Ignition finished successfully
Apr 30 00:36:49.870212 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 00:36:49.883807 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 00:36:49.884001 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:36:49.894988 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 00:36:49.895135 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 00:36:49.912778 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 00:36:49.912878 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 00:36:49.921726 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 00:36:49.921808 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 00:36:49.934776 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 00:36:49.934837 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 00:36:49.943668 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 00:36:49.943710 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 00:36:49.954600 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 30 00:36:49.954641 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 30 00:36:49.964815 systemd[1]: Stopped target network.target - Network.
Apr 30 00:36:49.974616 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 00:36:49.974663 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 00:36:49.985517 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 00:36:49.996028 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 00:36:50.000707 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:36:50.007157 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 00:36:50.018384 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 00:36:50.028838 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 00:36:50.028890 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 00:36:50.038742 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 00:36:50.038780 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 00:36:50.049712 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 00:36:50.049763 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 00:36:50.060909 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 00:36:50.060957 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 00:36:50.071450 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 00:36:50.081611 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 00:36:50.092113 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 00:36:50.096461 systemd-networkd[870]: eth0: DHCPv6 lease lost
Apr 30 00:36:50.098070 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 00:36:50.099607 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 00:36:50.111102 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 00:36:50.111188 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 00:36:50.305212 kernel: hv_netvsc 002248b4-e227-0022-48b4-e227002248b4 eth0: Data path switched from VF: enP10284s1
Apr 30 00:36:50.123442 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 00:36:50.123493 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:36:50.147527 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 00:36:50.157125 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 00:36:50.157192 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 00:36:50.168040 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 00:36:50.168086 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:36:50.178277 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 00:36:50.178321 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:36:50.188332 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 00:36:50.188411 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:36:50.200014 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:36:50.238586 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 00:36:50.238811 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:36:50.249041 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 00:36:50.249085 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:36:50.260705 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 00:36:50.260748 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:36:50.270546 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 00:36:50.270593 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 00:36:50.286967 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 00:36:50.287015 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 00:36:50.315447 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 00:36:50.315503 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:36:50.354595 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 00:36:50.365983 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 00:36:50.366063 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:36:50.378051 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:36:50.378095 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:36:50.389584 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 00:36:50.389662 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 00:36:50.402120 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 00:36:50.402227 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 00:36:50.910725 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 00:36:50.910842 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 00:36:50.920850 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 00:36:50.930647 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 00:36:50.930725 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 00:36:50.956573 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 00:36:53.147565 systemd[1]: Switching root.
Apr 30 00:36:53.179994 systemd-journald[217]: Journal stopped
Apr 30 00:37:03.951025 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Apr 30 00:37:03.951048 kernel: SELinux: policy capability network_peer_controls=1
Apr 30 00:37:03.951058 kernel: SELinux: policy capability open_perms=1
Apr 30 00:37:03.951068 kernel: SELinux: policy capability extended_socket_class=1
Apr 30 00:37:03.951076 kernel: SELinux: policy capability always_check_network=0
Apr 30 00:37:03.951086 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 30 00:37:03.951094 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 30 00:37:03.951102 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 30 00:37:03.951110 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 30 00:37:03.951118 systemd[1]: Successfully loaded SELinux policy in 260.072ms.
Apr 30 00:37:03.951129 kernel: audit: type=1403 audit(1745973419.172:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 30 00:37:03.951137 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.692ms.
Apr 30 00:37:03.951147 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 00:37:03.951156 systemd[1]: Detected virtualization microsoft.
Apr 30 00:37:03.951165 systemd[1]: Detected architecture arm64.
Apr 30 00:37:03.951176 systemd[1]: Detected first boot.
Apr 30 00:37:03.951185 systemd[1]: Hostname set to .
Apr 30 00:37:03.951194 systemd[1]: Initializing machine ID from random generator.
Apr 30 00:37:03.951203 zram_generator::config[1169]: No configuration found.
Apr 30 00:37:03.951213 systemd[1]: Populated /etc with preset unit settings.
Apr 30 00:37:03.951222 systemd[1]: Queued start job for default target multi-user.target.
Apr 30 00:37:03.951232 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 30 00:37:03.951242 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 00:37:03.951251 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 30 00:37:03.951260 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 30 00:37:03.951269 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 30 00:37:03.951279 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 30 00:37:03.951289 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 30 00:37:03.951300 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 30 00:37:03.951309 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 30 00:37:03.951318 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:37:03.951327 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:37:03.951336 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 30 00:37:03.951345 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 30 00:37:03.951354 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 30 00:37:03.951378 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 00:37:03.951388 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Apr 30 00:37:03.951399 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:37:03.951408 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 30 00:37:03.951417 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:37:03.951429 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 00:37:03.951438 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 00:37:03.951447 systemd[1]: Reached target swap.target - Swaps.
Apr 30 00:37:03.951457 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 30 00:37:03.951467 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 00:37:03.951477 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 00:37:03.951486 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 00:37:03.951496 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:37:03.951505 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:37:03.951515 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:37:03.951524 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 30 00:37:03.951535 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 30 00:37:03.951545 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 30 00:37:03.951554 systemd[1]: Mounting media.mount - External Media Directory...
Apr 30 00:37:03.951563 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 30 00:37:03.951573 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 30 00:37:03.951582 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 30 00:37:03.951593 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 30 00:37:03.951603 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:37:03.951613 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 00:37:03.951622 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 30 00:37:03.951632 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:37:03.951641 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 00:37:03.951650 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:37:03.951660 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 30 00:37:03.951669 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:37:03.951680 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 00:37:03.951690 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 30 00:37:03.951701 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 30 00:37:03.951710 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 00:37:03.951719 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 00:37:03.951729 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 30 00:37:03.951738 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 30 00:37:03.951749 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 00:37:03.951774 systemd-journald[1265]: Collecting audit messages is disabled.
Apr 30 00:37:03.951793 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 30 00:37:03.951803 kernel: loop: module loaded
Apr 30 00:37:03.951812 systemd-journald[1265]: Journal started
Apr 30 00:37:03.951833 systemd-journald[1265]: Runtime Journal (/run/log/journal/6165ae8b863c4fb6b6d5ec985b44540d) is 8.0M, max 78.5M, 70.5M free.
Apr 30 00:37:03.969557 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 00:37:03.970973 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 30 00:37:03.979020 systemd[1]: Mounted media.mount - External Media Directory.
Apr 30 00:37:03.984696 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 30 00:37:03.994292 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 30 00:37:03.996379 kernel: fuse: init (API version 7.39)
Apr 30 00:37:04.005990 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 30 00:37:04.013994 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:37:04.026088 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 30 00:37:04.038543 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 30 00:37:04.038706 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 30 00:37:04.047390 kernel: ACPI: bus type drm_connector registered
Apr 30 00:37:04.045992 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:37:04.046136 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:37:04.052736 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 00:37:04.052884 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 00:37:04.058861 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:37:04.059007 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:37:04.066737 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 30 00:37:04.066882 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 30 00:37:04.073002 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:37:04.073195 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:37:04.079712 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:37:04.086238 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 30 00:37:04.093328 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 30 00:37:04.101151 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:37:04.115830 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 30 00:37:04.125471 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 30 00:37:04.144532 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 30 00:37:04.150663 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 30 00:37:04.291506 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 30 00:37:04.298758 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 30 00:37:04.305075 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 00:37:04.307536 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 30 00:37:04.313460 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 00:37:04.314620 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 00:37:04.322528 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 00:37:04.330387 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 30 00:37:04.342167 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 30 00:37:04.349743 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 30 00:37:04.359282 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 30 00:37:04.371062 systemd-journald[1265]: Time spent on flushing to /var/log/journal/6165ae8b863c4fb6b6d5ec985b44540d is 260.655ms for 885 entries.
Apr 30 00:37:04.371062 systemd-journald[1265]: System Journal (/var/log/journal/6165ae8b863c4fb6b6d5ec985b44540d) is 11.8M, max 2.6G, 2.6G free.
Apr 30 00:37:05.315791 systemd-journald[1265]: Received client request to flush runtime journal.
Apr 30 00:37:05.315867 systemd-journald[1265]: /var/log/journal/6165ae8b863c4fb6b6d5ec985b44540d/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Apr 30 00:37:05.315890 systemd-journald[1265]: Rotating system journal.
Apr 30 00:37:04.373285 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 30 00:37:04.384825 udevadm[1329]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 30 00:37:04.445882 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:37:04.815545 systemd-tmpfiles[1327]: ACLs are not supported, ignoring.
Apr 30 00:37:04.815556 systemd-tmpfiles[1327]: ACLs are not supported, ignoring.
Apr 30 00:37:04.819780 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 00:37:04.830578 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 30 00:37:05.318789 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 30 00:37:05.790796 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 30 00:37:05.802735 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 00:37:05.816706 systemd-tmpfiles[1349]: ACLs are not supported, ignoring.
Apr 30 00:37:05.817013 systemd-tmpfiles[1349]: ACLs are not supported, ignoring.
Apr 30 00:37:05.823509 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:37:07.302799 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 30 00:37:07.313510 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:37:07.338359 systemd-udevd[1355]: Using default interface naming scheme 'v255'.
Apr 30 00:37:07.464144 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:37:07.483906 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 00:37:07.520108 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Apr 30 00:37:07.561622 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 30 00:37:07.612160 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 30 00:37:07.625397 kernel: mousedev: PS/2 mouse device common for all mice
Apr 30 00:37:07.660133 kernel: hv_vmbus: registering driver hv_balloon
Apr 30 00:37:07.660206 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Apr 30 00:37:07.666244 kernel: hv_balloon: Memory hot add disabled on ARM64
Apr 30 00:37:07.677288 kernel: hv_vmbus: registering driver hyperv_fb
Apr 30 00:37:07.677413 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Apr 30 00:37:07.684043 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Apr 30 00:37:07.689086 kernel: Console: switching to colour dummy device 80x25
Apr 30 00:37:07.691451 kernel: Console: switching to colour frame buffer device 128x48
Apr 30 00:37:07.725859 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:37:07.743357 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:37:07.744700 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:37:07.756604 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:37:07.784463 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1373)
Apr 30 00:37:07.787965 systemd-networkd[1368]: lo: Link UP
Apr 30 00:37:07.787973 systemd-networkd[1368]: lo: Gained carrier
Apr 30 00:37:07.790968 systemd-networkd[1368]: Enumeration completed
Apr 30 00:37:07.791086 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 00:37:07.792724 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:37:07.792730 systemd-networkd[1368]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 00:37:07.807648 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 30 00:37:07.849384 kernel: mlx5_core 282c:00:02.0 enP10284s1: Link up
Apr 30 00:37:07.864849 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Apr 30 00:37:07.878403 kernel: hv_netvsc 002248b4-e227-0022-48b4-e227002248b4 eth0: Data path switched to VF: enP10284s1
Apr 30 00:37:07.878680 systemd-networkd[1368]: enP10284s1: Link UP
Apr 30 00:37:07.878775 systemd-networkd[1368]: eth0: Link UP
Apr 30 00:37:07.878785 systemd-networkd[1368]: eth0: Gained carrier
Apr 30 00:37:07.878798 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:37:07.882604 systemd-networkd[1368]: enP10284s1: Gained carrier
Apr 30 00:37:07.896433 systemd-networkd[1368]: eth0: DHCPv4 address 10.200.20.17/24, gateway 10.200.20.1 acquired from 168.63.129.16
Apr 30 00:37:07.966860 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 30 00:37:07.977500 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 30 00:37:08.100219 lvm[1445]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 00:37:08.129910 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 30 00:37:08.137913 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:37:08.149505 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 30 00:37:08.155607 lvm[1449]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 00:37:08.179149 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 30 00:37:08.186769 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 00:37:08.195357 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 30 00:37:08.195410 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 00:37:08.201700 systemd[1]: Reached target machines.target - Containers.
Apr 30 00:37:08.208129 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 30 00:37:08.223496 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 30 00:37:08.230876 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 30 00:37:08.237150 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:37:08.238302 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 30 00:37:08.247652 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 30 00:37:08.259562 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 30 00:37:08.267047 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:37:08.274485 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 30 00:37:08.309427 kernel: loop0: detected capacity change from 0 to 194096
Apr 30 00:37:08.335572 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 30 00:37:08.346744 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 30 00:37:08.351295 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 30 00:37:08.363517 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 30 00:37:08.408389 kernel: loop1: detected capacity change from 0 to 114328
Apr 30 00:37:08.789615 kernel: loop2: detected capacity change from 0 to 114432
Apr 30 00:37:09.365395 kernel: loop3: detected capacity change from 0 to 31320
Apr 30 00:37:09.788548 systemd-networkd[1368]: enP10284s1: Gained IPv6LL
Apr 30 00:37:09.865383 kernel: loop4: detected capacity change from 0 to 194096
Apr 30 00:37:09.876397 kernel: loop5: detected capacity change from 0 to 114328
Apr 30 00:37:09.886401 kernel: loop6: detected capacity change from 0 to 114432
Apr 30 00:37:09.894403 kernel: loop7: detected capacity change from 0 to 31320
Apr 30 00:37:09.896541 (sd-merge)[1473]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Apr 30 00:37:09.896949 (sd-merge)[1473]: Merged extensions into '/usr'.
Apr 30 00:37:09.899871 systemd[1]: Reloading requested from client PID 1459 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 30 00:37:09.899886 systemd[1]: Reloading...
Apr 30 00:37:09.916450 systemd-networkd[1368]: eth0: Gained IPv6LL
Apr 30 00:37:09.953413 zram_generator::config[1503]: No configuration found.
Apr 30 00:37:10.088030 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:37:10.160966 systemd[1]: Reloading finished in 260 ms.
Apr 30 00:37:10.177402 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 30 00:37:10.185100 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 30 00:37:10.198494 systemd[1]: Starting ensure-sysext.service...
Apr 30 00:37:10.203600 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 00:37:10.212575 systemd[1]: Reloading requested from client PID 1565 ('systemctl') (unit ensure-sysext.service)...
Apr 30 00:37:10.212593 systemd[1]: Reloading...
Apr 30 00:37:10.227353 systemd-tmpfiles[1566]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 30 00:37:10.228790 systemd-tmpfiles[1566]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 30 00:37:10.229654 systemd-tmpfiles[1566]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 30 00:37:10.229961 systemd-tmpfiles[1566]: ACLs are not supported, ignoring.
Apr 30 00:37:10.230076 systemd-tmpfiles[1566]: ACLs are not supported, ignoring.
Apr 30 00:37:10.247281 systemd-tmpfiles[1566]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 00:37:10.247482 systemd-tmpfiles[1566]: Skipping /boot
Apr 30 00:37:10.256950 systemd-tmpfiles[1566]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 00:37:10.257062 systemd-tmpfiles[1566]: Skipping /boot
Apr 30 00:37:10.263400 zram_generator::config[1595]: No configuration found.
Apr 30 00:37:10.396561 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:37:10.474882 systemd[1]: Reloading finished in 262 ms.
Apr 30 00:37:10.490509 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:37:10.510658 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 30 00:37:10.518521 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 30 00:37:10.526973 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 30 00:37:10.536606 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 00:37:10.546749 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 30 00:37:10.567520 systemd[1]: Finished ensure-sysext.service.
Apr 30 00:37:10.573881 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:37:10.577603 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:37:10.593576 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 00:37:10.612551 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:37:10.620508 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:37:10.629533 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:37:10.629589 systemd[1]: Reached target time-set.target - System Time Set.
Apr 30 00:37:10.635295 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:37:10.635470 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:37:10.644036 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 00:37:10.644179 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 00:37:10.650322 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:37:10.650612 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:37:10.658187 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:37:10.658904 systemd-resolved[1665]: Positive Trust Anchors:
Apr 30 00:37:10.658914 systemd-resolved[1665]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 00:37:10.658944 systemd-resolved[1665]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 00:37:10.660580 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:37:10.671458 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 00:37:10.671667 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 00:37:10.673316 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 30 00:37:10.684328 systemd-resolved[1665]: Using system hostname 'ci-4081.3.3-a-2b660cb835'.
Apr 30 00:37:10.685915 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 00:37:10.692711 systemd[1]: Reached target network.target - Network.
Apr 30 00:37:10.697498 systemd[1]: Reached target network-online.target - Network is Online.
Apr 30 00:37:10.703425 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:37:10.735891 augenrules[1698]: No rules
Apr 30 00:37:10.737784 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 30 00:37:10.905828 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 30 00:37:11.261918 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 30 00:37:11.269226 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 30 00:37:14.883829 ldconfig[1455]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 30 00:37:14.907774 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 30 00:37:14.919474 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 30 00:37:14.927596 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 30 00:37:14.933802 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 00:37:14.941318 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 30 00:37:14.948286 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 30 00:37:14.955252 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 30 00:37:14.961256 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 30 00:37:14.967885 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 30 00:37:14.974880 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 30 00:37:14.974911 systemd[1]: Reached target paths.target - Path Units.
Apr 30 00:37:14.979738 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 00:37:14.999808 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 30 00:37:15.007075 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 30 00:37:15.013233 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 30 00:37:15.020290 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 30 00:37:15.026202 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 00:37:15.031437 systemd[1]: Reached target basic.target - Basic System.
Apr 30 00:37:15.036703 systemd[1]: System is tainted: cgroupsv1
Apr 30 00:37:15.036746 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 30 00:37:15.036763 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 30 00:37:15.038397 systemd[1]: Starting chronyd.service - NTP client/server...
Apr 30 00:37:15.045465 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 30 00:37:15.054510 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 30 00:37:15.062962 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 30 00:37:15.078456 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 30 00:37:15.087424 (chronyd)[1717]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Apr 30 00:37:15.087855 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 30 00:37:15.093234 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 30 00:37:15.093394 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Apr 30 00:37:15.096175 jq[1724]: false
Apr 30 00:37:15.101530 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Apr 30 00:37:15.103514 KVP[1728]: KVP starting; pid is:1728
Apr 30 00:37:15.107969 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Apr 30 00:37:15.109771 chronyd[1731]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Apr 30 00:37:15.110090 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:37:15.118640 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 30 00:37:15.126968 chronyd[1731]: Timezone right/UTC failed leap second check, ignoring
Apr 30 00:37:15.127170 chronyd[1731]: Loaded seccomp filter (level 2)
Apr 30 00:37:15.127424 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 30 00:37:15.135594 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 30 00:37:15.149383 extend-filesystems[1725]: Found loop4
Apr 30 00:37:15.149383 extend-filesystems[1725]: Found loop5
Apr 30 00:37:15.149383 extend-filesystems[1725]: Found loop6
Apr 30 00:37:15.149383 extend-filesystems[1725]: Found loop7
Apr 30 00:37:15.149383 extend-filesystems[1725]: Found sda
Apr 30 00:37:15.149383 extend-filesystems[1725]: Found sda1
Apr 30 00:37:15.149383 extend-filesystems[1725]: Found sda2
Apr 30 00:37:15.149383 extend-filesystems[1725]: Found sda3
Apr 30 00:37:15.149383 extend-filesystems[1725]: Found usr
Apr 30 00:37:15.149383 extend-filesystems[1725]: Found sda4
Apr 30 00:37:15.149383 extend-filesystems[1725]: Found sda6
Apr 30 00:37:15.149383 extend-filesystems[1725]: Found sda7
Apr 30 00:37:15.149383 extend-filesystems[1725]: Found sda9
Apr 30 00:37:15.149383 extend-filesystems[1725]: Checking size of /dev/sda9
Apr 30 00:37:15.305542 kernel: hv_utils: KVP IC version 4.0
Apr 30 00:37:15.153087 KVP[1728]: KVP LIC Version: 3.1
Apr 30 00:37:15.149928 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 30 00:37:15.305761 extend-filesystems[1725]: Old size kept for /dev/sda9
Apr 30 00:37:15.305761 extend-filesystems[1725]: Found sr0
Apr 30 00:37:15.244727 dbus-daemon[1721]: [system] SELinux support is enabled
Apr 30 00:37:15.385216 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1776)
Apr 30 00:37:15.165797 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 30 00:37:15.385465 coreos-metadata[1719]: Apr 30 00:37:15.367 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Apr 30 00:37:15.385465 coreos-metadata[1719]: Apr 30 00:37:15.370 INFO Fetch successful
Apr 30 00:37:15.385465 coreos-metadata[1719]: Apr 30 00:37:15.370 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Apr 30 00:37:15.385465 coreos-metadata[1719]: Apr 30 00:37:15.375 INFO Fetch successful
Apr 30 00:37:15.385465 coreos-metadata[1719]: Apr 30 00:37:15.375 INFO Fetching http://168.63.129.16/machine/228e3882-e7c0-47d7-996b-fe17885a0c6b/aef064fc%2D5157%2D4211%2Db46c%2Df711e6b80a90.%5Fci%2D4081.3.3%2Da%2D2b660cb835?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Apr 30 00:37:15.385465 coreos-metadata[1719]: Apr 30 00:37:15.379 INFO Fetch successful
Apr 30 00:37:15.385465 coreos-metadata[1719]: Apr 30 00:37:15.379 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Apr 30 00:37:15.187027 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 30 00:37:15.205853 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 30 00:37:15.395879 update_engine[1753]: I20250430 00:37:15.341813 1753 main.cc:92] Flatcar Update Engine starting
Apr 30 00:37:15.395879 update_engine[1753]: I20250430 00:37:15.350214 1753 update_check_scheduler.cc:74] Next update check in 4m44s
Apr 30 00:37:15.220818 systemd[1]: Starting update-engine.service - Update Engine...
Apr 30 00:37:15.396161 jq[1755]: true
Apr 30 00:37:15.244504 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 30 00:37:15.267392 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 30 00:37:15.281698 systemd[1]: Started chronyd.service - NTP client/server.
Apr 30 00:37:15.298731 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 30 00:37:15.298975 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 30 00:37:15.299206 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 30 00:37:15.301337 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 30 00:37:15.388909 systemd[1]: motdgen.service: Deactivated successfully.
Apr 30 00:37:15.389133 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 30 00:37:15.397386 coreos-metadata[1719]: Apr 30 00:37:15.396 INFO Fetch successful
Apr 30 00:37:15.399946 systemd-logind[1748]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 30 00:37:15.400104 systemd-logind[1748]: New seat seat0.
Apr 30 00:37:15.409702 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 30 00:37:15.423785 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 30 00:37:15.439625 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 30 00:37:15.439850 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 30 00:37:15.490127 jq[1815]: true
Apr 30 00:37:15.493025 (ntainerd)[1816]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 30 00:37:15.506344 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 30 00:37:15.524409 tar[1812]: linux-arm64/helm
Apr 30 00:37:15.520569 dbus-daemon[1721]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 30 00:37:15.536663 systemd[1]: Started update-engine.service - Update Engine.
Apr 30 00:37:15.546006 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 30 00:37:15.546178 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 30 00:37:15.546301 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 30 00:37:15.553599 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 30 00:37:15.553708 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 30 00:37:15.561660 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 30 00:37:15.571561 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 30 00:37:15.651231 bash[1851]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 00:37:15.655764 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 30 00:37:15.670935 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 30 00:37:15.849442 locksmithd[1850]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 30 00:37:16.104918 tar[1812]: linux-arm64/LICENSE
Apr 30 00:37:16.104918 tar[1812]: linux-arm64/README.md
Apr 30 00:37:16.130734 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 30 00:37:16.284263 containerd[1816]: time="2025-04-30T00:37:16.284179160Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 30 00:37:16.336402 containerd[1816]: time="2025-04-30T00:37:16.335714800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:37:16.337478 containerd[1816]: time="2025-04-30T00:37:16.337441160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:37:16.337478 containerd[1816]: time="2025-04-30T00:37:16.337475320Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 30 00:37:16.337552 containerd[1816]: time="2025-04-30T00:37:16.337491360Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 30 00:37:16.337662 containerd[1816]: time="2025-04-30T00:37:16.337633160Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 30 00:37:16.337753 containerd[1816]: time="2025-04-30T00:37:16.337654840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 30 00:37:16.337837 containerd[1816]: time="2025-04-30T00:37:16.337814880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:37:16.337837 containerd[1816]: time="2025-04-30T00:37:16.337834360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:37:16.338384 containerd[1816]: time="2025-04-30T00:37:16.338030320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:37:16.338384 containerd[1816]: time="2025-04-30T00:37:16.338050040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 30 00:37:16.338384 containerd[1816]: time="2025-04-30T00:37:16.338062360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:37:16.338384 containerd[1816]: time="2025-04-30T00:37:16.338071920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 30 00:37:16.338384 containerd[1816]: time="2025-04-30T00:37:16.338137040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:37:16.338384 containerd[1816]: time="2025-04-30T00:37:16.338315040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:37:16.338510 containerd[1816]: time="2025-04-30T00:37:16.338458640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:37:16.338510 containerd[1816]: time="2025-04-30T00:37:16.338473880Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 30 00:37:16.339424 containerd[1816]: time="2025-04-30T00:37:16.338547760Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 30 00:37:16.339424 containerd[1816]: time="2025-04-30T00:37:16.338599760Z" level=info msg="metadata content store policy set" policy=shared
Apr 30 00:37:16.366674 containerd[1816]: time="2025-04-30T00:37:16.366602320Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 30 00:37:16.366981 containerd[1816]: time="2025-04-30T00:37:16.366795720Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 30 00:37:16.366981 containerd[1816]: time="2025-04-30T00:37:16.366821400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 30 00:37:16.366981 containerd[1816]: time="2025-04-30T00:37:16.366923120Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 30 00:37:16.366981 containerd[1816]: time="2025-04-30T00:37:16.366942440Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 30 00:37:16.368292 containerd[1816]: time="2025-04-30T00:37:16.368174680Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 30 00:37:16.370565 containerd[1816]: time="2025-04-30T00:37:16.369764400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 30 00:37:16.370565 containerd[1816]: time="2025-04-30T00:37:16.369885880Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 30 00:37:16.370565 containerd[1816]: time="2025-04-30T00:37:16.369901160Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 30 00:37:16.370565 containerd[1816]: time="2025-04-30T00:37:16.369914120Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 30 00:37:16.370565 containerd[1816]: time="2025-04-30T00:37:16.369930520Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 30 00:37:16.370565 containerd[1816]: time="2025-04-30T00:37:16.369945240Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 30 00:37:16.370565 containerd[1816]: time="2025-04-30T00:37:16.369957200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 30 00:37:16.370565 containerd[1816]: time="2025-04-30T00:37:16.369970800Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 30 00:37:16.370565 containerd[1816]: time="2025-04-30T00:37:16.369984520Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 30 00:37:16.370565 containerd[1816]: time="2025-04-30T00:37:16.369997800Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 30 00:37:16.370565 containerd[1816]: time="2025-04-30T00:37:16.370017160Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 30 00:37:16.370565 containerd[1816]: time="2025-04-30T00:37:16.370029000Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 30 00:37:16.370873 containerd[1816]: time="2025-04-30T00:37:16.370855040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 30 00:37:16.370942 containerd[1816]: time="2025-04-30T00:37:16.370930880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 30 00:37:16.371016 containerd[1816]: time="2025-04-30T00:37:16.371003720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 30 00:37:16.371092 containerd[1816]: time="2025-04-30T00:37:16.371080640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 30 00:37:16.371142 containerd[1816]: time="2025-04-30T00:37:16.371131280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 30 00:37:16.371209 containerd[1816]: time="2025-04-30T00:37:16.371197880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 30 00:37:16.371267 containerd[1816]: time="2025-04-30T00:37:16.371246720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 30 00:37:16.371337 containerd[1816]: time="2025-04-30T00:37:16.371319200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 30 00:37:16.371406 containerd[1816]: time="2025-04-30T00:37:16.371394040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 30 00:37:16.371468 containerd[1816]: time="2025-04-30T00:37:16.371458200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 30 00:37:16.371522 containerd[1816]: time="2025-04-30T00:37:16.371510840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 30 00:37:16.371648 containerd[1816]: time="2025-04-30T00:37:16.371581320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 30 00:37:16.371648 containerd[1816]: time="2025-04-30T00:37:16.371597760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 30 00:37:16.371648 containerd[1816]: time="2025-04-30T00:37:16.371614400Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 30 00:37:16.371753 containerd[1816]: time="2025-04-30T00:37:16.371740720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 30 00:37:16.371882 containerd[1816]: time="2025-04-30T00:37:16.371809880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 30 00:37:16.371882 containerd[1816]: time="2025-04-30T00:37:16.371825200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 30 00:37:16.371962 containerd[1816]: time="2025-04-30T00:37:16.371950320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 30 00:37:16.372161 containerd[1816]: time="2025-04-30T00:37:16.372071640Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 30 00:37:16.372161 containerd[1816]: time="2025-04-30T00:37:16.372098920Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 30 00:37:16.372161 containerd[1816]: time="2025-04-30T00:37:16.372112320Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 30 00:37:16.372161 containerd[1816]: time="2025-04-30T00:37:16.372121960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 30 00:37:16.372161 containerd[1816]: time="2025-04-30T00:37:16.372134560Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 30 00:37:16.372161 containerd[1816]: time="2025-04-30T00:37:16.372144600Z" level=info msg="NRI interface is disabled by configuration."
Apr 30 00:37:16.372375 containerd[1816]: time="2025-04-30T00:37:16.372305760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 30 00:37:16.372743 containerd[1816]: time="2025-04-30T00:37:16.372681720Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 30 00:37:16.373009 containerd[1816]: time="2025-04-30T00:37:16.372841880Z" level=info msg="Connect containerd service"
Apr 30 00:37:16.373993 containerd[1816]: time="2025-04-30T00:37:16.373693480Z" level=info msg="using legacy CRI server"
Apr 30 00:37:16.373993 containerd[1816]: time="2025-04-30T00:37:16.373711920Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 30 00:37:16.373993 containerd[1816]: time="2025-04-30T00:37:16.373815480Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 30 00:37:16.375639 containerd[1816]: time="2025-04-30T00:37:16.375527640Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 30 00:37:16.382653 containerd[1816]: time="2025-04-30T00:37:16.375607240Z" level=info msg="Start subscribing containerd event"
Apr 30 00:37:16.382653 containerd[1816]: time="2025-04-30T00:37:16.376046400Z" level=info msg="Start recovering state"
Apr 30 00:37:16.382653 containerd[1816]: time="2025-04-30T00:37:16.376276480Z" level=info msg="Start event monitor"
Apr 30 00:37:16.382653 containerd[1816]: time="2025-04-30T00:37:16.376290960Z" level=info msg="Start snapshots syncer"
Apr 30 00:37:16.382653 containerd[1816]: time="2025-04-30T00:37:16.376305520Z" level=info msg="Start cni network conf syncer for default"
Apr 30 00:37:16.382653 containerd[1816]: time="2025-04-30T00:37:16.376315120Z" level=info msg="Start streaming server"
Apr 30 00:37:16.382653 containerd[1816]: time="2025-04-30T00:37:16.376663600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 30 00:37:16.382653 containerd[1816]: time="2025-04-30T00:37:16.376706680Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 30 00:37:16.376862 systemd[1]: Started containerd.service - containerd container runtime.
Apr 30 00:37:16.385456 containerd[1816]: time="2025-04-30T00:37:16.385415120Z" level=info msg="containerd successfully booted in 0.105012s"
Apr 30 00:37:16.428509 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:37:16.434870 (kubelet)[1882]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:37:16.476562 sshd_keygen[1759]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 00:37:16.496152 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 00:37:16.508700 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 00:37:16.516439 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Apr 30 00:37:16.524359 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 00:37:16.524608 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 00:37:16.540836 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 00:37:16.550525 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Apr 30 00:37:16.568053 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 00:37:16.582619 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 00:37:16.597927 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Apr 30 00:37:16.605462 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 00:37:16.612156 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 00:37:16.619610 systemd[1]: Startup finished in 19.940s (kernel) + 17.705s (userspace) = 37.646s. 
Apr 30 00:37:16.942911 login[1914]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:37:16.946161 kubelet[1882]: E0430 00:37:16.944855 1882 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:37:16.948471 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:37:16.948648 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:37:16.949463 login[1915]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:37:16.960044 systemd-logind[1748]: New session 1 of user core. Apr 30 00:37:16.961798 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 00:37:16.969573 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 00:37:16.973438 systemd-logind[1748]: New session 2 of user core. Apr 30 00:37:16.981012 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 00:37:16.986604 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 00:37:16.997471 (systemd)[1928]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 00:37:17.113268 systemd[1928]: Queued start job for default target default.target. Apr 30 00:37:17.113660 systemd[1928]: Created slice app.slice - User Application Slice. Apr 30 00:37:17.113683 systemd[1928]: Reached target paths.target - Paths. Apr 30 00:37:17.113694 systemd[1928]: Reached target timers.target - Timers. Apr 30 00:37:17.120448 systemd[1928]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 00:37:17.126939 systemd[1928]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
Apr 30 00:37:17.126999 systemd[1928]: Reached target sockets.target - Sockets. Apr 30 00:37:17.127011 systemd[1928]: Reached target basic.target - Basic System. Apr 30 00:37:17.127053 systemd[1928]: Reached target default.target - Main User Target. Apr 30 00:37:17.127077 systemd[1928]: Startup finished in 123ms. Apr 30 00:37:17.127373 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 00:37:17.136596 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 00:37:17.137902 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 30 00:37:18.499385 waagent[1907]: 2025-04-30T00:37:18.497921Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Apr 30 00:37:18.503544 waagent[1907]: 2025-04-30T00:37:18.503489Z INFO Daemon Daemon OS: flatcar 4081.3.3 Apr 30 00:37:18.507941 waagent[1907]: 2025-04-30T00:37:18.507892Z INFO Daemon Daemon Python: 3.11.9 Apr 30 00:37:18.512062 waagent[1907]: 2025-04-30T00:37:18.512015Z INFO Daemon Daemon Run daemon Apr 30 00:37:18.515812 waagent[1907]: 2025-04-30T00:37:18.515767Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.3' Apr 30 00:37:18.524325 waagent[1907]: 2025-04-30T00:37:18.524279Z INFO Daemon Daemon Using waagent for provisioning Apr 30 00:37:18.529283 waagent[1907]: 2025-04-30T00:37:18.529244Z INFO Daemon Daemon Activate resource disk Apr 30 00:37:18.533618 waagent[1907]: 2025-04-30T00:37:18.533580Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Apr 30 00:37:18.544191 waagent[1907]: 2025-04-30T00:37:18.544147Z INFO Daemon Daemon Found device: None Apr 30 00:37:18.548581 waagent[1907]: 2025-04-30T00:37:18.548542Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Apr 30 00:37:18.556634 waagent[1907]: 2025-04-30T00:37:18.556596Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] 
unable to detect disk topology, duration=0 Apr 30 00:37:18.568591 waagent[1907]: 2025-04-30T00:37:18.568545Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 30 00:37:18.574106 waagent[1907]: 2025-04-30T00:37:18.574065Z INFO Daemon Daemon Running default provisioning handler Apr 30 00:37:18.585146 waagent[1907]: 2025-04-30T00:37:18.585096Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Apr 30 00:37:18.598020 waagent[1907]: 2025-04-30T00:37:18.597970Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Apr 30 00:37:18.607165 waagent[1907]: 2025-04-30T00:37:18.607125Z INFO Daemon Daemon cloud-init is enabled: False Apr 30 00:37:18.611931 waagent[1907]: 2025-04-30T00:37:18.611892Z INFO Daemon Daemon Copying ovf-env.xml Apr 30 00:37:18.750098 waagent[1907]: 2025-04-30T00:37:18.749421Z INFO Daemon Daemon Successfully mounted dvd Apr 30 00:37:18.764143 waagent[1907]: 2025-04-30T00:37:18.764072Z INFO Daemon Daemon Detect protocol endpoint Apr 30 00:37:18.764675 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Apr 30 00:37:18.769021 waagent[1907]: 2025-04-30T00:37:18.768967Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 30 00:37:18.774805 waagent[1907]: 2025-04-30T00:37:18.774758Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Apr 30 00:37:18.780969 waagent[1907]: 2025-04-30T00:37:18.780925Z INFO Daemon Daemon Test for route to 168.63.129.16 Apr 30 00:37:18.785986 waagent[1907]: 2025-04-30T00:37:18.785945Z INFO Daemon Daemon Route to 168.63.129.16 exists Apr 30 00:37:18.790996 waagent[1907]: 2025-04-30T00:37:18.790951Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Apr 30 00:37:18.822664 waagent[1907]: 2025-04-30T00:37:18.822620Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Apr 30 00:37:18.829134 waagent[1907]: 2025-04-30T00:37:18.829106Z INFO Daemon Daemon Wire protocol version:2012-11-30 Apr 30 00:37:18.834235 waagent[1907]: 2025-04-30T00:37:18.834198Z INFO Daemon Daemon Server preferred version:2015-04-05 Apr 30 00:37:19.125322 waagent[1907]: 2025-04-30T00:37:19.125173Z INFO Daemon Daemon Initializing goal state during protocol detection Apr 30 00:37:19.131953 waagent[1907]: 2025-04-30T00:37:19.131896Z INFO Daemon Daemon Forcing an update of the goal state. Apr 30 00:37:19.140753 waagent[1907]: 2025-04-30T00:37:19.140706Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Apr 30 00:37:19.182642 waagent[1907]: 2025-04-30T00:37:19.182597Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164 Apr 30 00:37:19.188470 waagent[1907]: 2025-04-30T00:37:19.188429Z INFO Daemon Apr 30 00:37:19.191150 waagent[1907]: 2025-04-30T00:37:19.191102Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: e89aa7af-dfa0-4361-9b30-7ec98ec5923d eTag: 7970482241915952125 source: Fabric] Apr 30 00:37:19.202203 waagent[1907]: 2025-04-30T00:37:19.202161Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Apr 30 00:37:19.208971 waagent[1907]: 2025-04-30T00:37:19.208930Z INFO Daemon Apr 30 00:37:19.211759 waagent[1907]: 2025-04-30T00:37:19.211721Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Apr 30 00:37:19.222405 waagent[1907]: 2025-04-30T00:37:19.222356Z INFO Daemon Daemon Downloading artifacts profile blob Apr 30 00:37:19.381140 waagent[1907]: 2025-04-30T00:37:19.381013Z INFO Daemon Downloaded certificate {'thumbprint': 'D4FB27EF512094793455BE15812D35A07E872E8F', 'hasPrivateKey': False} Apr 30 00:37:19.390689 waagent[1907]: 2025-04-30T00:37:19.390640Z INFO Daemon Downloaded certificate {'thumbprint': '9173F014F94EC42E4F4C12273EB127E07F2DB645', 'hasPrivateKey': True} Apr 30 00:37:19.399883 waagent[1907]: 2025-04-30T00:37:19.399838Z INFO Daemon Fetch goal state completed Apr 30 00:37:19.446292 waagent[1907]: 2025-04-30T00:37:19.446231Z INFO Daemon Daemon Starting provisioning Apr 30 00:37:19.451177 waagent[1907]: 2025-04-30T00:37:19.451131Z INFO Daemon Daemon Handle ovf-env.xml. Apr 30 00:37:19.455642 waagent[1907]: 2025-04-30T00:37:19.455603Z INFO Daemon Daemon Set hostname [ci-4081.3.3-a-2b660cb835] Apr 30 00:37:19.477651 waagent[1907]: 2025-04-30T00:37:19.477594Z INFO Daemon Daemon Publish hostname [ci-4081.3.3-a-2b660cb835] Apr 30 00:37:19.483932 waagent[1907]: 2025-04-30T00:37:19.483886Z INFO Daemon Daemon Examine /proc/net/route for primary interface Apr 30 00:37:19.490099 waagent[1907]: 2025-04-30T00:37:19.490059Z INFO Daemon Daemon Primary interface is [eth0] Apr 30 00:37:19.518889 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:37:19.518895 systemd-networkd[1368]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 30 00:37:19.518920 systemd-networkd[1368]: eth0: DHCP lease lost Apr 30 00:37:19.520599 waagent[1907]: 2025-04-30T00:37:19.520529Z INFO Daemon Daemon Create user account if not exists Apr 30 00:37:19.525824 waagent[1907]: 2025-04-30T00:37:19.525778Z INFO Daemon Daemon User core already exists, skip useradd Apr 30 00:37:19.531472 waagent[1907]: 2025-04-30T00:37:19.531430Z INFO Daemon Daemon Configure sudoer Apr 30 00:37:19.535746 waagent[1907]: 2025-04-30T00:37:19.535699Z INFO Daemon Daemon Configure sshd Apr 30 00:37:19.540080 waagent[1907]: 2025-04-30T00:37:19.540036Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Apr 30 00:37:19.552243 waagent[1907]: 2025-04-30T00:37:19.552201Z INFO Daemon Daemon Deploy ssh public key. Apr 30 00:37:19.556511 systemd-networkd[1368]: eth0: DHCPv6 lease lost Apr 30 00:37:19.572409 systemd-networkd[1368]: eth0: DHCPv4 address 10.200.20.17/24, gateway 10.200.20.1 acquired from 168.63.129.16 Apr 30 00:37:20.708422 waagent[1907]: 2025-04-30T00:37:20.703033Z INFO Daemon Daemon Provisioning complete Apr 30 00:37:20.721660 waagent[1907]: 2025-04-30T00:37:20.721616Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Apr 30 00:37:20.728440 waagent[1907]: 2025-04-30T00:37:20.728392Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Apr 30 00:37:20.737483 waagent[1907]: 2025-04-30T00:37:20.737443Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Apr 30 00:37:20.861975 waagent[1988]: 2025-04-30T00:37:20.861358Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Apr 30 00:37:20.861975 waagent[1988]: 2025-04-30T00:37:20.861521Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.3 Apr 30 00:37:20.861975 waagent[1988]: 2025-04-30T00:37:20.861575Z INFO ExtHandler ExtHandler Python: 3.11.9 Apr 30 00:37:20.908435 waagent[1988]: 2025-04-30T00:37:20.908340Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.3; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Apr 30 00:37:20.908728 waagent[1988]: 2025-04-30T00:37:20.908692Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 30 00:37:20.908875 waagent[1988]: 2025-04-30T00:37:20.908842Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 30 00:37:20.916919 waagent[1988]: 2025-04-30T00:37:20.916866Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Apr 30 00:37:20.922290 waagent[1988]: 2025-04-30T00:37:20.922254Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 Apr 30 00:37:20.922836 waagent[1988]: 2025-04-30T00:37:20.922797Z INFO ExtHandler Apr 30 00:37:20.922974 waagent[1988]: 2025-04-30T00:37:20.922942Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 0912b082-2319-4f0b-b81d-3684c7b20339 eTag: 7970482241915952125 source: Fabric] Apr 30 00:37:20.923318 waagent[1988]: 2025-04-30T00:37:20.923281Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Apr 30 00:37:20.923982 waagent[1988]: 2025-04-30T00:37:20.923941Z INFO ExtHandler Apr 30 00:37:20.924684 waagent[1988]: 2025-04-30T00:37:20.924084Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Apr 30 00:37:20.929381 waagent[1988]: 2025-04-30T00:37:20.928096Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Apr 30 00:37:21.016904 waagent[1988]: 2025-04-30T00:37:21.016789Z INFO ExtHandler Downloaded certificate {'thumbprint': 'D4FB27EF512094793455BE15812D35A07E872E8F', 'hasPrivateKey': False} Apr 30 00:37:21.017439 waagent[1988]: 2025-04-30T00:37:21.017397Z INFO ExtHandler Downloaded certificate {'thumbprint': '9173F014F94EC42E4F4C12273EB127E07F2DB645', 'hasPrivateKey': True} Apr 30 00:37:21.017970 waagent[1988]: 2025-04-30T00:37:21.017928Z INFO ExtHandler Fetch goal state completed Apr 30 00:37:21.033786 waagent[1988]: 2025-04-30T00:37:21.033737Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1988 Apr 30 00:37:21.034090 waagent[1988]: 2025-04-30T00:37:21.034054Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Apr 30 00:37:21.035767 waagent[1988]: 2025-04-30T00:37:21.035726Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.3', '', 'Flatcar Container Linux by Kinvolk'] Apr 30 00:37:21.036220 waagent[1988]: 2025-04-30T00:37:21.036183Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Apr 30 00:37:21.069601 waagent[1988]: 2025-04-30T00:37:21.069565Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Apr 30 00:37:21.069907 waagent[1988]: 2025-04-30T00:37:21.069871Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Apr 30 00:37:21.076135 waagent[1988]: 2025-04-30T00:37:21.076106Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not 
enabled. Adding it now Apr 30 00:37:21.082464 systemd[1]: Reloading requested from client PID 2003 ('systemctl') (unit waagent.service)... Apr 30 00:37:21.082484 systemd[1]: Reloading... Apr 30 00:37:21.132582 zram_generator::config[2036]: No configuration found. Apr 30 00:37:21.254220 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:37:21.331850 systemd[1]: Reloading finished in 249 ms. Apr 30 00:37:21.351892 waagent[1988]: 2025-04-30T00:37:21.351545Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Apr 30 00:37:21.357108 systemd[1]: Reloading requested from client PID 2096 ('systemctl') (unit waagent.service)... Apr 30 00:37:21.357128 systemd[1]: Reloading... Apr 30 00:37:21.423391 zram_generator::config[2130]: No configuration found. Apr 30 00:37:21.537331 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:37:21.615393 systemd[1]: Reloading finished in 257 ms. Apr 30 00:37:21.638294 waagent[1988]: 2025-04-30T00:37:21.637550Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Apr 30 00:37:21.638294 waagent[1988]: 2025-04-30T00:37:21.637709Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Apr 30 00:37:21.961488 waagent[1988]: 2025-04-30T00:37:21.961331Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Apr 30 00:37:21.962358 waagent[1988]: 2025-04-30T00:37:21.962313Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Apr 30 00:37:21.963248 waagent[1988]: 2025-04-30T00:37:21.963179Z INFO ExtHandler ExtHandler Starting env monitor service. Apr 30 00:37:21.963464 waagent[1988]: 2025-04-30T00:37:21.963409Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 30 00:37:21.963826 waagent[1988]: 2025-04-30T00:37:21.963777Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Apr 30 00:37:21.963987 waagent[1988]: 2025-04-30T00:37:21.963887Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 30 00:37:21.964095 waagent[1988]: 2025-04-30T00:37:21.964034Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 30 00:37:21.964497 waagent[1988]: 2025-04-30T00:37:21.964447Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Apr 30 00:37:21.964771 waagent[1988]: 2025-04-30T00:37:21.964727Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Apr 30 00:37:21.964921 waagent[1988]: 2025-04-30T00:37:21.964837Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Apr 30 00:37:21.965404 waagent[1988]: 2025-04-30T00:37:21.965328Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Apr 30 00:37:21.965537 waagent[1988]: 2025-04-30T00:37:21.965483Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Apr 30 00:37:21.965614 waagent[1988]: 2025-04-30T00:37:21.965575Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 30 00:37:21.965816 waagent[1988]: 2025-04-30T00:37:21.965748Z INFO EnvHandler ExtHandler Configure routes Apr 30 00:37:21.966210 waagent[1988]: 2025-04-30T00:37:21.966160Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Apr 30 00:37:21.966210 waagent[1988]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Apr 30 00:37:21.966210 waagent[1988]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Apr 30 00:37:21.966210 waagent[1988]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Apr 30 00:37:21.966210 waagent[1988]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Apr 30 00:37:21.966210 waagent[1988]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 30 00:37:21.966210 waagent[1988]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 30 00:37:21.966910 waagent[1988]: 2025-04-30T00:37:21.966797Z INFO EnvHandler ExtHandler Gateway:None Apr 30 00:37:21.966910 waagent[1988]: 2025-04-30T00:37:21.966703Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Apr 30 00:37:21.967129 waagent[1988]: 2025-04-30T00:37:21.967096Z INFO EnvHandler ExtHandler Routes:None Apr 30 00:37:21.975382 waagent[1988]: 2025-04-30T00:37:21.975299Z INFO ExtHandler ExtHandler Apr 30 00:37:21.975647 waagent[1988]: 2025-04-30T00:37:21.975603Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 427bb411-7dd6-46e4-aa34-8bfb076a65d4 correlation c6fcc87d-4d23-4dd6-b58e-4fa21ff13c04 created: 2025-04-30T00:35:52.769302Z] Apr 30 00:37:21.976816 waagent[1988]: 2025-04-30T00:37:21.976678Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Apr 30 00:37:21.979398 waagent[1988]: 2025-04-30T00:37:21.978966Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Apr 30 00:37:22.019069 waagent[1988]: 2025-04-30T00:37:22.019017Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 71A5B6CE-71CA-4957-B8D7-90FA5C80AB6F;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Apr 30 00:37:22.052699 waagent[1988]: 2025-04-30T00:37:22.052614Z INFO MonitorHandler ExtHandler Network interfaces: Apr 30 00:37:22.052699 waagent[1988]: Executing ['ip', '-a', '-o', 'link']: Apr 30 00:37:22.052699 waagent[1988]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Apr 30 00:37:22.052699 waagent[1988]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b4:e2:27 brd ff:ff:ff:ff:ff:ff Apr 30 00:37:22.052699 waagent[1988]: 3: enP10284s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b4:e2:27 brd ff:ff:ff:ff:ff:ff\ altname enP10284p0s2 Apr 30 00:37:22.052699 waagent[1988]: Executing ['ip', '-4', '-a', '-o', 'address']: Apr 30 00:37:22.052699 waagent[1988]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Apr 30 00:37:22.052699 waagent[1988]: 2: eth0 inet 10.200.20.17/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Apr 30 00:37:22.052699 waagent[1988]: Executing ['ip', '-6', '-a', '-o', 'address']: Apr 30 00:37:22.052699 waagent[1988]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Apr 30 00:37:22.052699 waagent[1988]: 2: eth0 inet6 fe80::222:48ff:feb4:e227/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Apr 30 00:37:22.052699 waagent[1988]: 3: enP10284s1 inet6 fe80::222:48ff:feb4:e227/64 scope link proto 
kernel_ll \ valid_lft forever preferred_lft forever Apr 30 00:37:22.464221 waagent[1988]: 2025-04-30T00:37:22.463353Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Apr 30 00:37:22.464221 waagent[1988]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Apr 30 00:37:22.464221 waagent[1988]: pkts bytes target prot opt in out source destination Apr 30 00:37:22.464221 waagent[1988]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Apr 30 00:37:22.464221 waagent[1988]: pkts bytes target prot opt in out source destination Apr 30 00:37:22.464221 waagent[1988]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Apr 30 00:37:22.464221 waagent[1988]: pkts bytes target prot opt in out source destination Apr 30 00:37:22.464221 waagent[1988]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Apr 30 00:37:22.464221 waagent[1988]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Apr 30 00:37:22.464221 waagent[1988]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Apr 30 00:37:22.466092 waagent[1988]: 2025-04-30T00:37:22.466050Z INFO EnvHandler ExtHandler Current Firewall rules: Apr 30 00:37:22.466092 waagent[1988]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Apr 30 00:37:22.466092 waagent[1988]: pkts bytes target prot opt in out source destination Apr 30 00:37:22.466092 waagent[1988]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Apr 30 00:37:22.466092 waagent[1988]: pkts bytes target prot opt in out source destination Apr 30 00:37:22.466092 waagent[1988]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Apr 30 00:37:22.466092 waagent[1988]: pkts bytes target prot opt in out source destination Apr 30 00:37:22.466092 waagent[1988]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Apr 30 00:37:22.466092 waagent[1988]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Apr 30 00:37:22.466092 waagent[1988]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Apr 
30 00:37:22.466589 waagent[1988]: 2025-04-30T00:37:22.466558Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Apr 30 00:37:27.199273 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 00:37:27.208516 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:37:27.298071 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:37:27.300934 (kubelet)[2235]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:37:27.340644 kubelet[2235]: E0430 00:37:27.340566 2235 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:37:27.344602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:37:27.344891 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:37:37.461138 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 30 00:37:37.469515 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:37:37.551386 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 00:37:37.553853 (kubelet)[2256]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:37:37.592944 kubelet[2256]: E0430 00:37:37.592881 2256 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:37:37.596037 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:37:37.596193 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:37:38.912084 chronyd[1731]: Selected source PHC0 Apr 30 00:37:39.700287 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 00:37:39.712722 systemd[1]: Started sshd@0-10.200.20.17:22-10.200.16.10:56794.service - OpenSSH per-connection server daemon (10.200.16.10:56794). Apr 30 00:37:40.237034 sshd[2265]: Accepted publickey for core from 10.200.16.10 port 56794 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw Apr 30 00:37:40.238298 sshd[2265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:37:40.242342 systemd-logind[1748]: New session 3 of user core. Apr 30 00:37:40.248685 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 00:37:40.639696 systemd[1]: Started sshd@1-10.200.20.17:22-10.200.16.10:56796.service - OpenSSH per-connection server daemon (10.200.16.10:56796). Apr 30 00:37:41.079540 sshd[2270]: Accepted publickey for core from 10.200.16.10 port 56796 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw Apr 30 00:37:41.080927 sshd[2270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:37:41.085449 systemd-logind[1748]: New session 4 of user core. 
Apr 30 00:37:41.092606 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 30 00:37:41.403575 sshd[2270]: pam_unix(sshd:session): session closed for user core
Apr 30 00:37:41.407521 systemd-logind[1748]: Session 4 logged out. Waiting for processes to exit.
Apr 30 00:37:41.408082 systemd[1]: sshd@1-10.200.20.17:22-10.200.16.10:56796.service: Deactivated successfully.
Apr 30 00:37:41.409788 systemd[1]: session-4.scope: Deactivated successfully.
Apr 30 00:37:41.411239 systemd-logind[1748]: Removed session 4.
Apr 30 00:37:41.483559 systemd[1]: Started sshd@2-10.200.20.17:22-10.200.16.10:56810.service - OpenSSH per-connection server daemon (10.200.16.10:56810).
Apr 30 00:37:41.922519 sshd[2278]: Accepted publickey for core from 10.200.16.10 port 56810 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:37:41.923809 sshd[2278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:37:41.927429 systemd-logind[1748]: New session 5 of user core.
Apr 30 00:37:41.933668 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 30 00:37:42.243554 sshd[2278]: pam_unix(sshd:session): session closed for user core
Apr 30 00:37:42.246045 systemd-logind[1748]: Session 5 logged out. Waiting for processes to exit.
Apr 30 00:37:42.246279 systemd[1]: sshd@2-10.200.20.17:22-10.200.16.10:56810.service: Deactivated successfully.
Apr 30 00:37:42.249108 systemd[1]: session-5.scope: Deactivated successfully.
Apr 30 00:37:42.250275 systemd-logind[1748]: Removed session 5.
Apr 30 00:37:42.318567 systemd[1]: Started sshd@3-10.200.20.17:22-10.200.16.10:56812.service - OpenSSH per-connection server daemon (10.200.16.10:56812).
Apr 30 00:37:42.729804 sshd[2286]: Accepted publickey for core from 10.200.16.10 port 56812 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:37:42.731046 sshd[2286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:37:42.735858 systemd-logind[1748]: New session 6 of user core.
Apr 30 00:37:42.741597 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 30 00:37:43.034359 sshd[2286]: pam_unix(sshd:session): session closed for user core
Apr 30 00:37:43.037518 systemd[1]: sshd@3-10.200.20.17:22-10.200.16.10:56812.service: Deactivated successfully.
Apr 30 00:37:43.040745 systemd-logind[1748]: Session 6 logged out. Waiting for processes to exit.
Apr 30 00:37:43.041215 systemd[1]: session-6.scope: Deactivated successfully.
Apr 30 00:37:43.042325 systemd-logind[1748]: Removed session 6.
Apr 30 00:37:43.114565 systemd[1]: Started sshd@4-10.200.20.17:22-10.200.16.10:56814.service - OpenSSH per-connection server daemon (10.200.16.10:56814).
Apr 30 00:37:43.559609 sshd[2294]: Accepted publickey for core from 10.200.16.10 port 56814 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:37:43.560885 sshd[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:37:43.565795 systemd-logind[1748]: New session 7 of user core.
Apr 30 00:37:43.571705 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 30 00:37:43.895532 sudo[2298]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 30 00:37:43.895793 sudo[2298]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:37:43.922049 sudo[2298]: pam_unix(sudo:session): session closed for user root
Apr 30 00:37:43.992658 sshd[2294]: pam_unix(sshd:session): session closed for user core
Apr 30 00:37:43.995560 systemd-logind[1748]: Session 7 logged out. Waiting for processes to exit.
Apr 30 00:37:43.997541 systemd[1]: sshd@4-10.200.20.17:22-10.200.16.10:56814.service: Deactivated successfully.
Apr 30 00:37:44.000096 systemd[1]: session-7.scope: Deactivated successfully.
Apr 30 00:37:44.001748 systemd-logind[1748]: Removed session 7.
Apr 30 00:37:44.092600 systemd[1]: Started sshd@5-10.200.20.17:22-10.200.16.10:56816.service - OpenSSH per-connection server daemon (10.200.16.10:56816).
Apr 30 00:37:44.530264 sshd[2303]: Accepted publickey for core from 10.200.16.10 port 56816 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:37:44.531567 sshd[2303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:37:44.536308 systemd-logind[1748]: New session 8 of user core.
Apr 30 00:37:44.545624 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 30 00:37:44.782559 sudo[2308]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 30 00:37:44.782813 sudo[2308]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:37:44.785846 sudo[2308]: pam_unix(sudo:session): session closed for user root
Apr 30 00:37:44.789808 sudo[2307]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 30 00:37:44.790041 sudo[2307]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:37:44.801562 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 30 00:37:44.802867 auditctl[2311]: No rules
Apr 30 00:37:44.803260 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 00:37:44.803491 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 30 00:37:44.806737 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 30 00:37:44.827511 augenrules[2330]: No rules
Apr 30 00:37:44.829048 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 30 00:37:44.831074 sudo[2307]: pam_unix(sudo:session): session closed for user root
Apr 30 00:37:44.906581 sshd[2303]: pam_unix(sshd:session): session closed for user core
Apr 30 00:37:44.909744 systemd[1]: sshd@5-10.200.20.17:22-10.200.16.10:56816.service: Deactivated successfully.
Apr 30 00:37:44.912712 systemd[1]: session-8.scope: Deactivated successfully.
Apr 30 00:37:44.913541 systemd-logind[1748]: Session 8 logged out. Waiting for processes to exit.
Apr 30 00:37:44.914579 systemd-logind[1748]: Removed session 8.
Apr 30 00:37:44.989007 systemd[1]: Started sshd@6-10.200.20.17:22-10.200.16.10:56818.service - OpenSSH per-connection server daemon (10.200.16.10:56818).
Apr 30 00:37:45.429490 sshd[2339]: Accepted publickey for core from 10.200.16.10 port 56818 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:37:45.430735 sshd[2339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:37:45.435553 systemd-logind[1748]: New session 9 of user core.
Apr 30 00:37:45.440607 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 30 00:37:45.684077 sudo[2343]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 30 00:37:45.684341 sudo[2343]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:37:46.868672 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 30 00:37:46.869166 (dockerd)[2359]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 30 00:37:47.592083 dockerd[2359]: time="2025-04-30T00:37:47.592026217Z" level=info msg="Starting up"
Apr 30 00:37:47.710895 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 30 00:37:47.718592 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:37:48.134139 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:37:48.136838 (kubelet)[2380]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:37:48.177705 kubelet[2380]: E0430 00:37:48.177667 2380 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:37:48.180238 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:37:48.180443 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:37:48.534173 dockerd[2359]: time="2025-04-30T00:37:48.534123094Z" level=info msg="Loading containers: start."
Apr 30 00:37:48.708408 kernel: Initializing XFRM netlink socket
Apr 30 00:37:48.857600 systemd-networkd[1368]: docker0: Link UP
Apr 30 00:37:48.901003 dockerd[2359]: time="2025-04-30T00:37:48.900399635Z" level=info msg="Loading containers: done."
Apr 30 00:37:48.934949 dockerd[2359]: time="2025-04-30T00:37:48.934908082Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 30 00:37:48.935323 dockerd[2359]: time="2025-04-30T00:37:48.935155442Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 30 00:37:48.935495 dockerd[2359]: time="2025-04-30T00:37:48.935432441Z" level=info msg="Daemon has completed initialization"
Apr 30 00:37:48.999041 dockerd[2359]: time="2025-04-30T00:37:48.998891906Z" level=info msg="API listen on /run/docker.sock"
Apr 30 00:37:48.999387 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 30 00:37:50.452514 containerd[1816]: time="2025-04-30T00:37:50.452476129Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
Apr 30 00:37:51.431921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1764471880.mount: Deactivated successfully.
Apr 30 00:37:52.942403 containerd[1816]: time="2025-04-30T00:37:52.942043632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:37:52.946747 containerd[1816]: time="2025-04-30T00:37:52.946707017Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794150"
Apr 30 00:37:52.951495 containerd[1816]: time="2025-04-30T00:37:52.951420723Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:37:52.959065 containerd[1816]: time="2025-04-30T00:37:52.958998939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:37:52.960549 containerd[1816]: time="2025-04-30T00:37:52.960230215Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 2.507712286s"
Apr 30 00:37:52.960549 containerd[1816]: time="2025-04-30T00:37:52.960269295Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\""
Apr 30 00:37:52.978296 containerd[1816]: time="2025-04-30T00:37:52.978230439Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
Apr 30 00:37:54.498044 containerd[1816]: time="2025-04-30T00:37:54.497995651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:37:54.503798 containerd[1816]: time="2025-04-30T00:37:54.503763513Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855550"
Apr 30 00:37:54.512658 containerd[1816]: time="2025-04-30T00:37:54.512624665Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:37:54.519811 containerd[1816]: time="2025-04-30T00:37:54.519765223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:37:54.520920 containerd[1816]: time="2025-04-30T00:37:54.520794100Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.542528701s"
Apr 30 00:37:54.520920 containerd[1816]: time="2025-04-30T00:37:54.520828980Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\""
Apr 30 00:37:54.539756 containerd[1816]: time="2025-04-30T00:37:54.539180842Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
Apr 30 00:37:55.619728 containerd[1816]: time="2025-04-30T00:37:55.619670877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:37:55.623435 containerd[1816]: time="2025-04-30T00:37:55.623400945Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263945"
Apr 30 00:37:55.629260 containerd[1816]: time="2025-04-30T00:37:55.629219807Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:37:55.634834 containerd[1816]: time="2025-04-30T00:37:55.634779470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:37:55.635966 containerd[1816]: time="2025-04-30T00:37:55.635846146Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.096629504s"
Apr 30 00:37:55.635966 containerd[1816]: time="2025-04-30T00:37:55.635880226Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\""
Apr 30 00:37:55.655081 containerd[1816]: time="2025-04-30T00:37:55.654542888Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
Apr 30 00:37:55.797389 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Apr 30 00:37:56.754313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4204228040.mount: Deactivated successfully.
Apr 30 00:37:57.488408 containerd[1816]: time="2025-04-30T00:37:57.488023706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:37:57.492270 containerd[1816]: time="2025-04-30T00:37:57.492237173Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775705"
Apr 30 00:37:57.495635 containerd[1816]: time="2025-04-30T00:37:57.495589323Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:37:57.501668 containerd[1816]: time="2025-04-30T00:37:57.501622864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:37:57.502418 containerd[1816]: time="2025-04-30T00:37:57.502174382Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.847586494s"
Apr 30 00:37:57.502418 containerd[1816]: time="2025-04-30T00:37:57.502210102Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\""
Apr 30 00:37:57.520394 containerd[1816]: time="2025-04-30T00:37:57.520266646Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Apr 30 00:37:58.170048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1240695420.mount: Deactivated successfully.
Apr 30 00:37:58.210911 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Apr 30 00:37:58.220520 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:37:58.453744 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:37:58.456339 (kubelet)[2626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:37:58.490566 kubelet[2626]: E0430 00:37:58.490463 2626 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:37:58.493532 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:37:58.493688 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:37:59.620414 containerd[1816]: time="2025-04-30T00:37:59.619923196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:37:59.624340 containerd[1816]: time="2025-04-30T00:37:59.624292502Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
Apr 30 00:37:59.629161 containerd[1816]: time="2025-04-30T00:37:59.629132087Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:37:59.636071 containerd[1816]: time="2025-04-30T00:37:59.636018386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:37:59.637258 containerd[1816]: time="2025-04-30T00:37:59.637146183Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.116845697s"
Apr 30 00:37:59.637258 containerd[1816]: time="2025-04-30T00:37:59.637177142Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Apr 30 00:37:59.656777 containerd[1816]: time="2025-04-30T00:37:59.656747402Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Apr 30 00:38:00.302666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount560883402.mount: Deactivated successfully.
Apr 30 00:38:00.340543 containerd[1816]: time="2025-04-30T00:38:00.340491915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:38:00.343815 containerd[1816]: time="2025-04-30T00:38:00.343784305Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821"
Apr 30 00:38:00.349393 containerd[1816]: time="2025-04-30T00:38:00.349322168Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:38:00.357501 containerd[1816]: time="2025-04-30T00:38:00.357431622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:38:00.358858 containerd[1816]: time="2025-04-30T00:38:00.358349620Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 701.418459ms"
Apr 30 00:38:00.358858 containerd[1816]: time="2025-04-30T00:38:00.358396499Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Apr 30 00:38:00.376448 containerd[1816]: time="2025-04-30T00:38:00.376375484Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Apr 30 00:38:00.986808 update_engine[1753]: I20250430 00:38:00.986744 1753 update_attempter.cc:509] Updating boot flags...
Apr 30 00:38:01.056492 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2700)
Apr 30 00:38:01.168430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3569654114.mount: Deactivated successfully.
Apr 30 00:38:04.681949 containerd[1816]: time="2025-04-30T00:38:04.681893968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:38:04.685753 containerd[1816]: time="2025-04-30T00:38:04.685521437Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472"
Apr 30 00:38:04.690166 containerd[1816]: time="2025-04-30T00:38:04.690137983Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:38:04.696427 containerd[1816]: time="2025-04-30T00:38:04.696392323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:38:04.697625 containerd[1816]: time="2025-04-30T00:38:04.697596520Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 4.321185637s"
Apr 30 00:38:04.697803 containerd[1816]: time="2025-04-30T00:38:04.697710079Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Apr 30 00:38:08.711045 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Apr 30 00:38:08.719584 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:38:08.989571 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:38:08.990758 (kubelet)[2843]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:38:09.029857 kubelet[2843]: E0430 00:38:09.029821 2843 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:38:09.032728 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:38:09.032962 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:38:09.808653 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:38:09.814552 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:38:09.838169 systemd[1]: Reloading requested from client PID 2859 ('systemctl') (unit session-9.scope)...
Apr 30 00:38:09.838190 systemd[1]: Reloading...
Apr 30 00:38:09.932455 zram_generator::config[2903]: No configuration found.
Apr 30 00:38:10.035010 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:38:10.111063 systemd[1]: Reloading finished in 272 ms.
Apr 30 00:38:10.150777 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 30 00:38:10.150987 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 30 00:38:10.151453 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:38:10.164603 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:38:10.249045 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:38:10.252154 (kubelet)[2978]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 30 00:38:10.290388 kubelet[2978]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 00:38:10.290388 kubelet[2978]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Apr 30 00:38:10.290388 kubelet[2978]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 00:38:10.290388 kubelet[2978]: I0430 00:38:10.290357 2978 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 30 00:38:10.835241 kubelet[2978]: I0430 00:38:10.835211 2978 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Apr 30 00:38:10.835392 kubelet[2978]: I0430 00:38:10.835384 2978 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 30 00:38:10.835639 kubelet[2978]: I0430 00:38:10.835626 2978 server.go:927] "Client rotation is on, will bootstrap in background"
Apr 30 00:38:10.848662 kubelet[2978]: E0430 00:38:10.848624 2978 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.17:6443: connect: connection refused
Apr 30 00:38:10.848774 kubelet[2978]: I0430 00:38:10.848753 2978 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 30 00:38:10.856122 kubelet[2978]: I0430 00:38:10.856093 2978 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 30 00:38:10.857383 kubelet[2978]: I0430 00:38:10.857334 2978 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 30 00:38:10.857550 kubelet[2978]: I0430 00:38:10.857386 2978 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-a-2b660cb835","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Apr 30 00:38:10.857628 kubelet[2978]: I0430 00:38:10.857560 2978 topology_manager.go:138] "Creating topology manager with none policy"
Apr 30 00:38:10.857628 kubelet[2978]: I0430 00:38:10.857570 2978 container_manager_linux.go:301] "Creating device plugin manager"
Apr 30 00:38:10.857699 kubelet[2978]: I0430 00:38:10.857679 2978 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 00:38:10.858435 kubelet[2978]: I0430 00:38:10.858418 2978 kubelet.go:400] "Attempting to sync node with API server"
Apr 30 00:38:10.858466 kubelet[2978]: I0430 00:38:10.858443 2978 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 30 00:38:10.858663 kubelet[2978]: I0430 00:38:10.858646 2978 kubelet.go:312] "Adding apiserver pod source"
Apr 30 00:38:10.858687 kubelet[2978]: I0430 00:38:10.858671 2978 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 30 00:38:10.861470 kubelet[2978]: W0430 00:38:10.861425 2978 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-a-2b660cb835&limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused
Apr 30 00:38:10.861534 kubelet[2978]: E0430 00:38:10.861478 2978 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-a-2b660cb835&limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused
Apr 30 00:38:10.861586 kubelet[2978]: I0430 00:38:10.861565 2978 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 30 00:38:10.861723 kubelet[2978]: I0430 00:38:10.861707 2978 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Apr 30 00:38:10.861761 kubelet[2978]: W0430 00:38:10.861749 2978 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 30 00:38:10.862209 kubelet[2978]: I0430 00:38:10.862189 2978 server.go:1264] "Started kubelet"
Apr 30 00:38:10.868384 kubelet[2978]: I0430 00:38:10.866141 2978 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 30 00:38:10.868384 kubelet[2978]: W0430 00:38:10.866835 2978 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused
Apr 30 00:38:10.868384 kubelet[2978]: E0430 00:38:10.866872 2978 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused
Apr 30 00:38:10.868384 kubelet[2978]: E0430 00:38:10.866907 2978 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.17:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.17:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.3-a-2b660cb835.183af1a3ae16a6f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-a-2b660cb835,UID:ci-4081.3.3-a-2b660cb835,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-a-2b660cb835,},FirstTimestamp:2025-04-30 00:38:10.862171889 +0000 UTC m=+0.606991391,LastTimestamp:2025-04-30 00:38:10.862171889 +0000 UTC m=+0.606991391,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-a-2b660cb835,}"
Apr 30 00:38:10.868384 kubelet[2978]: I0430 00:38:10.867954 2978 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Apr 30 00:38:10.868957 kubelet[2978]: I0430 00:38:10.868937 2978 server.go:455] "Adding debug handlers to kubelet server"
Apr 30 00:38:10.869845 kubelet[2978]: I0430 00:38:10.869798 2978 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 30 00:38:10.870101 kubelet[2978]: I0430 00:38:10.870088 2978 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 30 00:38:10.871082 kubelet[2978]: I0430 00:38:10.871051 2978 volume_manager.go:291] "Starting Kubelet Volume Manager"
Apr 30 00:38:10.871721 kubelet[2978]: I0430 00:38:10.871568 2978 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Apr 30 00:38:10.871721 kubelet[2978]: I0430 00:38:10.871634 2978 reconciler.go:26] "Reconciler: start to sync state"
Apr 30 00:38:10.871721 kubelet[2978]: E0430 00:38:10.871695 2978 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-2b660cb835?timeout=10s\": dial tcp 10.200.20.17:6443: connect: connection refused" interval="200ms"
Apr 30 00:38:10.876025 kubelet[2978]: I0430 00:38:10.875997 2978 factory.go:221] Registration of the systemd container factory successfully
Apr 30 00:38:10.876131 kubelet[2978]: I0430 00:38:10.876101 2978 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 30 00:38:10.876593 kubelet[2978]: W0430 00:38:10.876543 2978 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused
Apr 30 00:38:10.876593 kubelet[2978]: E0430 00:38:10.876591 2978 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to
list *v1.CSIDriver: Get "https://10.200.20.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Apr 30 00:38:10.878000 kubelet[2978]: E0430 00:38:10.877979 2978 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 00:38:10.878329 kubelet[2978]: I0430 00:38:10.878311 2978 factory.go:221] Registration of the containerd container factory successfully Apr 30 00:38:10.899329 kubelet[2978]: I0430 00:38:10.899260 2978 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 00:38:10.902016 kubelet[2978]: I0430 00:38:10.901981 2978 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 00:38:10.902016 kubelet[2978]: I0430 00:38:10.902020 2978 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 00:38:10.902109 kubelet[2978]: I0430 00:38:10.902044 2978 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 00:38:10.902109 kubelet[2978]: E0430 00:38:10.902082 2978 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 00:38:10.903412 kubelet[2978]: W0430 00:38:10.903333 2978 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Apr 30 00:38:10.903412 kubelet[2978]: E0430 00:38:10.903403 2978 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Apr 30 00:38:10.907573 kubelet[2978]: I0430 
00:38:10.907494 2978 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 00:38:10.907573 kubelet[2978]: I0430 00:38:10.907540 2978 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 00:38:10.907573 kubelet[2978]: I0430 00:38:10.907578 2978 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:38:10.914578 kubelet[2978]: I0430 00:38:10.914552 2978 policy_none.go:49] "None policy: Start" Apr 30 00:38:10.915136 kubelet[2978]: I0430 00:38:10.915112 2978 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 00:38:10.915136 kubelet[2978]: I0430 00:38:10.915140 2978 state_mem.go:35] "Initializing new in-memory state store" Apr 30 00:38:10.922108 kubelet[2978]: I0430 00:38:10.922080 2978 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 00:38:10.922300 kubelet[2978]: I0430 00:38:10.922263 2978 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 00:38:10.922388 kubelet[2978]: I0430 00:38:10.922357 2978 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 00:38:10.926258 kubelet[2978]: E0430 00:38:10.926219 2978 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.3-a-2b660cb835\" not found" Apr 30 00:38:10.973090 kubelet[2978]: I0430 00:38:10.973056 2978 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-a-2b660cb835" Apr 30 00:38:10.973402 kubelet[2978]: E0430 00:38:10.973359 2978 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.17:6443/api/v1/nodes\": dial tcp 10.200.20.17:6443: connect: connection refused" node="ci-4081.3.3-a-2b660cb835" Apr 30 00:38:11.002775 kubelet[2978]: I0430 00:38:11.002680 2978 topology_manager.go:215] "Topology Admit Handler" podUID="df531184f96ea9ee16ee5472218b7517" podNamespace="kube-system" 
podName="kube-apiserver-ci-4081.3.3-a-2b660cb835" Apr 30 00:38:11.004318 kubelet[2978]: I0430 00:38:11.004214 2978 topology_manager.go:215] "Topology Admit Handler" podUID="199308127255611b1e23e12e15cdc5c3" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.3-a-2b660cb835" Apr 30 00:38:11.005671 kubelet[2978]: I0430 00:38:11.005645 2978 topology_manager.go:215] "Topology Admit Handler" podUID="4c1673ee0690663437d24be1fc1ab4b2" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.3-a-2b660cb835" Apr 30 00:38:11.072607 kubelet[2978]: I0430 00:38:11.072478 2978 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/199308127255611b1e23e12e15cdc5c3-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-a-2b660cb835\" (UID: \"199308127255611b1e23e12e15cdc5c3\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-2b660cb835" Apr 30 00:38:11.072607 kubelet[2978]: E0430 00:38:11.072490 2978 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-2b660cb835?timeout=10s\": dial tcp 10.200.20.17:6443: connect: connection refused" interval="400ms" Apr 30 00:38:11.072607 kubelet[2978]: I0430 00:38:11.072510 2978 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df531184f96ea9ee16ee5472218b7517-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-a-2b660cb835\" (UID: \"df531184f96ea9ee16ee5472218b7517\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-2b660cb835" Apr 30 00:38:11.072607 kubelet[2978]: I0430 00:38:11.072528 2978 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df531184f96ea9ee16ee5472218b7517-k8s-certs\") pod 
\"kube-apiserver-ci-4081.3.3-a-2b660cb835\" (UID: \"df531184f96ea9ee16ee5472218b7517\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-2b660cb835" Apr 30 00:38:11.072607 kubelet[2978]: I0430 00:38:11.072543 2978 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df531184f96ea9ee16ee5472218b7517-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-a-2b660cb835\" (UID: \"df531184f96ea9ee16ee5472218b7517\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-2b660cb835" Apr 30 00:38:11.072826 kubelet[2978]: I0430 00:38:11.072563 2978 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/199308127255611b1e23e12e15cdc5c3-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-2b660cb835\" (UID: \"199308127255611b1e23e12e15cdc5c3\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-2b660cb835" Apr 30 00:38:11.173773 kubelet[2978]: I0430 00:38:11.173003 2978 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/199308127255611b1e23e12e15cdc5c3-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-2b660cb835\" (UID: \"199308127255611b1e23e12e15cdc5c3\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-2b660cb835" Apr 30 00:38:11.173773 kubelet[2978]: I0430 00:38:11.173045 2978 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/199308127255611b1e23e12e15cdc5c3-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-a-2b660cb835\" (UID: \"199308127255611b1e23e12e15cdc5c3\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-2b660cb835" Apr 30 00:38:11.173773 kubelet[2978]: I0430 00:38:11.173062 2978 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4c1673ee0690663437d24be1fc1ab4b2-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-a-2b660cb835\" (UID: \"4c1673ee0690663437d24be1fc1ab4b2\") " pod="kube-system/kube-scheduler-ci-4081.3.3-a-2b660cb835" Apr 30 00:38:11.173773 kubelet[2978]: I0430 00:38:11.173118 2978 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/199308127255611b1e23e12e15cdc5c3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-a-2b660cb835\" (UID: \"199308127255611b1e23e12e15cdc5c3\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-2b660cb835" Apr 30 00:38:11.176003 kubelet[2978]: I0430 00:38:11.175744 2978 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-a-2b660cb835" Apr 30 00:38:11.176198 kubelet[2978]: E0430 00:38:11.176166 2978 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.17:6443/api/v1/nodes\": dial tcp 10.200.20.17:6443: connect: connection refused" node="ci-4081.3.3-a-2b660cb835" Apr 30 00:38:11.309950 containerd[1816]: time="2025-04-30T00:38:11.309911155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-a-2b660cb835,Uid:df531184f96ea9ee16ee5472218b7517,Namespace:kube-system,Attempt:0,}" Apr 30 00:38:11.311304 containerd[1816]: time="2025-04-30T00:38:11.311254512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-a-2b660cb835,Uid:199308127255611b1e23e12e15cdc5c3,Namespace:kube-system,Attempt:0,}" Apr 30 00:38:11.311782 containerd[1816]: time="2025-04-30T00:38:11.311753591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-a-2b660cb835,Uid:4c1673ee0690663437d24be1fc1ab4b2,Namespace:kube-system,Attempt:0,}" Apr 30 00:38:11.473384 
kubelet[2978]: E0430 00:38:11.473323 2978 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-2b660cb835?timeout=10s\": dial tcp 10.200.20.17:6443: connect: connection refused" interval="800ms" Apr 30 00:38:11.578430 kubelet[2978]: I0430 00:38:11.578396 2978 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-a-2b660cb835" Apr 30 00:38:11.578720 kubelet[2978]: E0430 00:38:11.578690 2978 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.17:6443/api/v1/nodes\": dial tcp 10.200.20.17:6443: connect: connection refused" node="ci-4081.3.3-a-2b660cb835" Apr 30 00:38:11.859170 kubelet[2978]: W0430 00:38:11.859002 2978 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-a-2b660cb835&limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Apr 30 00:38:11.859170 kubelet[2978]: E0430 00:38:11.859064 2978 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-a-2b660cb835&limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Apr 30 00:38:12.015584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3752536610.mount: Deactivated successfully. 
Apr 30 00:38:12.054766 containerd[1816]: time="2025-04-30T00:38:12.054715982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:38:12.059016 containerd[1816]: time="2025-04-30T00:38:12.058982612Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Apr 30 00:38:12.063423 containerd[1816]: time="2025-04-30T00:38:12.063352921Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:38:12.070402 containerd[1816]: time="2025-04-30T00:38:12.069662105Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:38:12.073423 containerd[1816]: time="2025-04-30T00:38:12.073382896Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:38:12.078393 containerd[1816]: time="2025-04-30T00:38:12.078012604Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:38:12.081382 containerd[1816]: time="2025-04-30T00:38:12.081148477Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:38:12.087158 containerd[1816]: time="2025-04-30T00:38:12.087128942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:38:12.087910 
containerd[1816]: time="2025-04-30T00:38:12.087875660Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 776.566628ms" Apr 30 00:38:12.088964 containerd[1816]: time="2025-04-30T00:38:12.088933937Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 777.128266ms" Apr 30 00:38:12.103007 containerd[1816]: time="2025-04-30T00:38:12.102956542Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 792.970347ms" Apr 30 00:38:12.151058 kubelet[2978]: W0430 00:38:12.150961 2978 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Apr 30 00:38:12.151058 kubelet[2978]: E0430 00:38:12.151003 2978 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Apr 30 00:38:12.163393 kubelet[2978]: W0430 00:38:12.163326 2978 reflector.go:547] k8s.io/client-go/informers/factory.go:160: 
failed to list *v1.Service: Get "https://10.200.20.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Apr 30 00:38:12.163451 kubelet[2978]: E0430 00:38:12.163400 2978 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Apr 30 00:38:12.274484 kubelet[2978]: E0430 00:38:12.274440 2978 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-2b660cb835?timeout=10s\": dial tcp 10.200.20.17:6443: connect: connection refused" interval="1.6s" Apr 30 00:38:12.381152 kubelet[2978]: I0430 00:38:12.381096 2978 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-a-2b660cb835" Apr 30 00:38:12.381433 kubelet[2978]: E0430 00:38:12.381407 2978 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.17:6443/api/v1/nodes\": dial tcp 10.200.20.17:6443: connect: connection refused" node="ci-4081.3.3-a-2b660cb835" Apr 30 00:38:12.402823 kubelet[2978]: W0430 00:38:12.402748 2978 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Apr 30 00:38:12.402823 kubelet[2978]: E0430 00:38:12.402797 2978 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Apr 30 00:38:12.862426 kubelet[2978]: E0430 00:38:12.862396 2978 certificate_manager.go:562] 
kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.17:6443: connect: connection refused Apr 30 00:38:12.921784 containerd[1816]: time="2025-04-30T00:38:12.921596146Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:38:12.921784 containerd[1816]: time="2025-04-30T00:38:12.921646946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:38:12.921784 containerd[1816]: time="2025-04-30T00:38:12.921661506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:38:12.921784 containerd[1816]: time="2025-04-30T00:38:12.921741585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:38:12.927358 containerd[1816]: time="2025-04-30T00:38:12.926736533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:38:12.927358 containerd[1816]: time="2025-04-30T00:38:12.926783293Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:38:12.927358 containerd[1816]: time="2025-04-30T00:38:12.926803653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:38:12.927358 containerd[1816]: time="2025-04-30T00:38:12.926890933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:38:12.930859 containerd[1816]: time="2025-04-30T00:38:12.930736643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:38:12.930953 containerd[1816]: time="2025-04-30T00:38:12.930869883Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:38:12.930953 containerd[1816]: time="2025-04-30T00:38:12.930885723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:38:12.931298 containerd[1816]: time="2025-04-30T00:38:12.931247962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:38:12.983205 containerd[1816]: time="2025-04-30T00:38:12.982935713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-a-2b660cb835,Uid:df531184f96ea9ee16ee5472218b7517,Namespace:kube-system,Attempt:0,} returns sandbox id \"d800822ab9ad14ffb94824efa2368516ecce05a763fd6aed742fc559bb899e8a\"" Apr 30 00:38:12.990273 containerd[1816]: time="2025-04-30T00:38:12.990236455Z" level=info msg="CreateContainer within sandbox \"d800822ab9ad14ffb94824efa2368516ecce05a763fd6aed742fc559bb899e8a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 00:38:12.991936 containerd[1816]: time="2025-04-30T00:38:12.991533452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-a-2b660cb835,Uid:199308127255611b1e23e12e15cdc5c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6e77adac19f87159638d755e53867a10a7fd235e3a81e94b062deac6dfbc72e\"" Apr 30 00:38:12.999190 containerd[1816]: time="2025-04-30T00:38:12.999152233Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-a-2b660cb835,Uid:4c1673ee0690663437d24be1fc1ab4b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a025e98d737b16a0cb1ab03cc069eeb65d72a82e588690661ff92e7e472ad70\"" Apr 30 00:38:12.999545 containerd[1816]: time="2025-04-30T00:38:12.999510832Z" level=info msg="CreateContainer within sandbox \"e6e77adac19f87159638d755e53867a10a7fd235e3a81e94b062deac6dfbc72e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 00:38:13.001943 containerd[1816]: time="2025-04-30T00:38:13.001907906Z" level=info msg="CreateContainer within sandbox \"1a025e98d737b16a0cb1ab03cc069eeb65d72a82e588690661ff92e7e472ad70\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 00:38:13.093469 containerd[1816]: time="2025-04-30T00:38:13.093430158Z" level=info msg="CreateContainer within sandbox \"d800822ab9ad14ffb94824efa2368516ecce05a763fd6aed742fc559bb899e8a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"57e940e80d9d84be415ccf89e6698b258002930df14cb68f8b968b5f540ea353\"" Apr 30 00:38:13.094252 containerd[1816]: time="2025-04-30T00:38:13.094231076Z" level=info msg="StartContainer for \"57e940e80d9d84be415ccf89e6698b258002930df14cb68f8b968b5f540ea353\"" Apr 30 00:38:13.107787 containerd[1816]: time="2025-04-30T00:38:13.107739683Z" level=info msg="CreateContainer within sandbox \"e6e77adac19f87159638d755e53867a10a7fd235e3a81e94b062deac6dfbc72e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"aba13e59e18c055a624f08834d90d86a2b0f94f1e5156c113050eb3273d73fc4\"" Apr 30 00:38:13.108985 containerd[1816]: time="2025-04-30T00:38:13.108445041Z" level=info msg="StartContainer for \"aba13e59e18c055a624f08834d90d86a2b0f94f1e5156c113050eb3273d73fc4\"" Apr 30 00:38:13.116267 containerd[1816]: time="2025-04-30T00:38:13.116172382Z" level=info msg="CreateContainer within sandbox 
\"1a025e98d737b16a0cb1ab03cc069eeb65d72a82e588690661ff92e7e472ad70\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8be18aa337f5c80b312a0029bbc1b5cdcfc45b58e501566611315f7902bd5039\"" Apr 30 00:38:13.117962 containerd[1816]: time="2025-04-30T00:38:13.117928737Z" level=info msg="StartContainer for \"8be18aa337f5c80b312a0029bbc1b5cdcfc45b58e501566611315f7902bd5039\"" Apr 30 00:38:13.164883 containerd[1816]: time="2025-04-30T00:38:13.164844821Z" level=info msg="StartContainer for \"57e940e80d9d84be415ccf89e6698b258002930df14cb68f8b968b5f540ea353\" returns successfully" Apr 30 00:38:13.211057 containerd[1816]: time="2025-04-30T00:38:13.210538147Z" level=info msg="StartContainer for \"8be18aa337f5c80b312a0029bbc1b5cdcfc45b58e501566611315f7902bd5039\" returns successfully" Apr 30 00:38:13.231646 containerd[1816]: time="2025-04-30T00:38:13.231595135Z" level=info msg="StartContainer for \"aba13e59e18c055a624f08834d90d86a2b0f94f1e5156c113050eb3273d73fc4\" returns successfully" Apr 30 00:38:13.984332 kubelet[2978]: I0430 00:38:13.984302 2978 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-a-2b660cb835" Apr 30 00:38:15.542356 kubelet[2978]: E0430 00:38:15.542183 2978 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.3-a-2b660cb835\" not found" node="ci-4081.3.3-a-2b660cb835" Apr 30 00:38:15.596465 kubelet[2978]: I0430 00:38:15.596290 2978 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.3-a-2b660cb835" Apr 30 00:38:15.870353 kubelet[2978]: I0430 00:38:15.870246 2978 apiserver.go:52] "Watching apiserver" Apr 30 00:38:15.972718 kubelet[2978]: I0430 00:38:15.972681 2978 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 00:38:17.736332 systemd[1]: Reloading requested from client PID 3253 ('systemctl') (unit session-9.scope)... Apr 30 00:38:17.736348 systemd[1]: Reloading... 
Apr 30 00:38:17.824390 zram_generator::config[3296]: No configuration found. Apr 30 00:38:17.934113 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:38:18.017980 systemd[1]: Reloading finished in 281 ms. Apr 30 00:38:18.046607 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:38:18.047170 kubelet[2978]: I0430 00:38:18.046664 2978 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 00:38:18.062459 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 00:38:18.062782 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:38:18.068808 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:38:18.483609 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:38:18.485302 (kubelet)[3367]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 00:38:18.536228 kubelet[3367]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:38:18.536228 kubelet[3367]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 00:38:18.536228 kubelet[3367]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 30 00:38:18.536600 kubelet[3367]: I0430 00:38:18.536283 3367 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 30 00:38:18.540386 kubelet[3367]: I0430 00:38:18.540184 3367 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Apr 30 00:38:18.540386 kubelet[3367]: I0430 00:38:18.540205 3367 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 30 00:38:18.540500 kubelet[3367]: I0430 00:38:18.540413 3367 server.go:927] "Client rotation is on, will bootstrap in background"
Apr 30 00:38:18.542896 kubelet[3367]: I0430 00:38:18.542869 3367 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Apr 30 00:38:18.546408 kubelet[3367]: I0430 00:38:18.545991 3367 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 30 00:38:18.558585 kubelet[3367]: I0430 00:38:18.558560 3367 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 30 00:38:18.559317 kubelet[3367]: I0430 00:38:18.559290 3367 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 30 00:38:18.559678 kubelet[3367]: I0430 00:38:18.559440 3367 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-a-2b660cb835","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Apr 30 00:38:18.559984 kubelet[3367]: I0430 00:38:18.559841 3367 topology_manager.go:138] "Creating topology manager with none policy"
Apr 30 00:38:18.559984 kubelet[3367]: I0430 00:38:18.559860 3367 container_manager_linux.go:301] "Creating device plugin manager"
Apr 30 00:38:18.559984 kubelet[3367]: I0430 00:38:18.559896 3367 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 00:38:18.560114 kubelet[3367]: I0430 00:38:18.560104 3367 kubelet.go:400] "Attempting to sync node with API server"
Apr 30 00:38:18.560595 kubelet[3367]: I0430 00:38:18.560583 3367 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 30 00:38:18.560688 kubelet[3367]: I0430 00:38:18.560679 3367 kubelet.go:312] "Adding apiserver pod source"
Apr 30 00:38:18.560758 kubelet[3367]: I0430 00:38:18.560749 3367 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 30 00:38:18.561825 kubelet[3367]: I0430 00:38:18.561808 3367 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 30 00:38:18.562053 kubelet[3367]: I0430 00:38:18.562041 3367 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Apr 30 00:38:18.562518 kubelet[3367]: I0430 00:38:18.562503 3367 server.go:1264] "Started kubelet"
Apr 30 00:38:18.564211 kubelet[3367]: I0430 00:38:18.564179 3367 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 30 00:38:18.568855 kubelet[3367]: I0430 00:38:18.568828 3367 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Apr 30 00:38:18.569810 kubelet[3367]: I0430 00:38:18.569793 3367 server.go:455] "Adding debug handlers to kubelet server"
Apr 30 00:38:18.570935 kubelet[3367]: I0430 00:38:18.570891 3367 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 30 00:38:18.571154 kubelet[3367]: I0430 00:38:18.571140 3367 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 30 00:38:18.572290 kubelet[3367]: I0430 00:38:18.572274 3367 volume_manager.go:291] "Starting Kubelet Volume Manager"
Apr 30 00:38:18.573709 kubelet[3367]: I0430 00:38:18.573692 3367 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Apr 30 00:38:18.573914 kubelet[3367]: I0430 00:38:18.573903 3367 reconciler.go:26] "Reconciler: start to sync state"
Apr 30 00:38:18.575401 kubelet[3367]: I0430 00:38:18.575344 3367 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Apr 30 00:38:18.577006 kubelet[3367]: I0430 00:38:18.576968 3367 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Apr 30 00:38:18.577133 kubelet[3367]: I0430 00:38:18.577123 3367 status_manager.go:217] "Starting to sync pod status with apiserver"
Apr 30 00:38:18.577213 kubelet[3367]: I0430 00:38:18.577204 3367 kubelet.go:2337] "Starting kubelet main sync loop"
Apr 30 00:38:18.577392 kubelet[3367]: E0430 00:38:18.577342 3367 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 30 00:38:18.591393 kubelet[3367]: I0430 00:38:18.591330 3367 factory.go:221] Registration of the containerd container factory successfully
Apr 30 00:38:18.591393 kubelet[3367]: I0430 00:38:18.591346 3367 factory.go:221] Registration of the systemd container factory successfully
Apr 30 00:38:18.591664 kubelet[3367]: I0430 00:38:18.591520 3367 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 30 00:38:18.678433 kubelet[3367]: E0430 00:38:18.678054 3367 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 30 00:38:18.687512 kubelet[3367]: I0430 00:38:18.687482 3367 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-a-2b660cb835"
Apr 30 00:38:18.712251 kubelet[3367]: I0430 00:38:18.712219 3367 cpu_manager.go:214] "Starting CPU manager" policy="none"
Apr 30 00:38:18.713555 kubelet[3367]: I0430 00:38:18.713535 3367 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Apr 30 00:38:18.714226 kubelet[3367]: I0430 00:38:18.713797 3367 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 00:38:18.714226 kubelet[3367]: I0430 00:38:18.713821 3367 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.3-a-2b660cb835"
Apr 30 00:38:18.714927 kubelet[3367]: I0430 00:38:18.714392 3367 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.3-a-2b660cb835"
Apr 30 00:38:18.714927 kubelet[3367]: I0430 00:38:18.714485 3367 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 30 00:38:18.714927 kubelet[3367]: I0430 00:38:18.714499 3367 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 30 00:38:18.714927 kubelet[3367]: I0430 00:38:18.714518 3367 policy_none.go:49] "None policy: Start"
Apr 30 00:38:18.717084 kubelet[3367]: I0430 00:38:18.716579 3367 memory_manager.go:170] "Starting memorymanager" policy="None"
Apr 30 00:38:18.717084 kubelet[3367]: I0430 00:38:18.716619 3367 state_mem.go:35] "Initializing new in-memory state store"
Apr 30 00:38:18.717084 kubelet[3367]: I0430 00:38:18.716801 3367 state_mem.go:75] "Updated machine memory state"
Apr 30 00:38:18.718714 kubelet[3367]: I0430 00:38:18.718685 3367 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Apr 30 00:38:18.719277 kubelet[3367]: I0430 00:38:18.718907 3367 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 30 00:38:18.719277 kubelet[3367]: I0430 00:38:18.718992 3367 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 30 00:38:18.878620 kubelet[3367]: I0430 00:38:18.878505 3367 topology_manager.go:215] "Topology Admit Handler" podUID="df531184f96ea9ee16ee5472218b7517" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.3-a-2b660cb835"
Apr 30 00:38:18.878724 kubelet[3367]: I0430 00:38:18.878626 3367 topology_manager.go:215] "Topology Admit Handler" podUID="199308127255611b1e23e12e15cdc5c3" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.3-a-2b660cb835"
Apr 30 00:38:18.878724 kubelet[3367]: I0430 00:38:18.878668 3367 topology_manager.go:215] "Topology Admit Handler" podUID="4c1673ee0690663437d24be1fc1ab4b2" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.3-a-2b660cb835"
Apr 30 00:38:18.893217 kubelet[3367]: W0430 00:38:18.892788 3367 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Apr 30 00:38:18.897253 kubelet[3367]: W0430 00:38:18.896727 3367 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Apr 30 00:38:18.897253 kubelet[3367]: W0430 00:38:18.896833 3367 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Apr 30 00:38:18.982031 kubelet[3367]: I0430 00:38:18.981987 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df531184f96ea9ee16ee5472218b7517-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-a-2b660cb835\" (UID: \"df531184f96ea9ee16ee5472218b7517\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-2b660cb835"
Apr 30 00:38:18.982031 kubelet[3367]: I0430 00:38:18.982036 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df531184f96ea9ee16ee5472218b7517-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-a-2b660cb835\" (UID: \"df531184f96ea9ee16ee5472218b7517\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-2b660cb835"
Apr 30 00:38:18.982031 kubelet[3367]: I0430 00:38:18.982062 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/199308127255611b1e23e12e15cdc5c3-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-2b660cb835\" (UID: \"199308127255611b1e23e12e15cdc5c3\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-2b660cb835"
Apr 30 00:38:18.982031 kubelet[3367]: I0430 00:38:18.982082 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/199308127255611b1e23e12e15cdc5c3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-a-2b660cb835\" (UID: \"199308127255611b1e23e12e15cdc5c3\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-2b660cb835"
Apr 30 00:38:18.982031 kubelet[3367]: I0430 00:38:18.982101 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4c1673ee0690663437d24be1fc1ab4b2-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-a-2b660cb835\" (UID: \"4c1673ee0690663437d24be1fc1ab4b2\") " pod="kube-system/kube-scheduler-ci-4081.3.3-a-2b660cb835"
Apr 30 00:38:18.982391 kubelet[3367]: I0430 00:38:18.982122 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df531184f96ea9ee16ee5472218b7517-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-a-2b660cb835\" (UID: \"df531184f96ea9ee16ee5472218b7517\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-2b660cb835"
Apr 30 00:38:18.982391 kubelet[3367]: I0430 00:38:18.982137 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/199308127255611b1e23e12e15cdc5c3-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-2b660cb835\" (UID: \"199308127255611b1e23e12e15cdc5c3\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-2b660cb835"
Apr 30 00:38:18.982391 kubelet[3367]: I0430 00:38:18.982152 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/199308127255611b1e23e12e15cdc5c3-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-a-2b660cb835\" (UID: \"199308127255611b1e23e12e15cdc5c3\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-2b660cb835"
Apr 30 00:38:18.982391 kubelet[3367]: I0430 00:38:18.982169 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/199308127255611b1e23e12e15cdc5c3-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-a-2b660cb835\" (UID: \"199308127255611b1e23e12e15cdc5c3\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-2b660cb835"
Apr 30 00:38:19.562000 kubelet[3367]: I0430 00:38:19.561773 3367 apiserver.go:52] "Watching apiserver"
Apr 30 00:38:19.576079 kubelet[3367]: I0430 00:38:19.576014 3367 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Apr 30 00:38:19.704321 kubelet[3367]: I0430 00:38:19.704099 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.3-a-2b660cb835" podStartSLOduration=1.7040797570000001 podStartE2EDuration="1.704079757s" podCreationTimestamp="2025-04-30 00:38:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:38:19.682295666 +0000 UTC m=+1.193568663" watchObservedRunningTime="2025-04-30 00:38:19.704079757 +0000 UTC m=+1.215352754"
Apr 30 00:38:19.733763 kubelet[3367]: I0430 00:38:19.733690 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.3-a-2b660cb835" podStartSLOduration=1.733396784 podStartE2EDuration="1.733396784s" podCreationTimestamp="2025-04-30 00:38:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:38:19.704235997 +0000 UTC m=+1.215508994" watchObservedRunningTime="2025-04-30 00:38:19.733396784 +0000 UTC m=+1.244669781"
Apr 30 00:38:19.755613 kubelet[3367]: I0430 00:38:19.754708 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.3-a-2b660cb835" podStartSLOduration=1.754692637 podStartE2EDuration="1.754692637s" podCreationTimestamp="2025-04-30 00:38:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:38:19.733833663 +0000 UTC m=+1.245106620" watchObservedRunningTime="2025-04-30 00:38:19.754692637 +0000 UTC m=+1.265965634"
Apr 30 00:38:23.398162 sudo[2343]: pam_unix(sudo:session): session closed for user root
Apr 30 00:38:23.469594 sshd[2339]: pam_unix(sshd:session): session closed for user core
Apr 30 00:38:23.473468 systemd[1]: sshd@6-10.200.20.17:22-10.200.16.10:56818.service: Deactivated successfully.
Apr 30 00:38:23.473872 systemd-logind[1748]: Session 9 logged out. Waiting for processes to exit.
Apr 30 00:38:23.477728 systemd[1]: session-9.scope: Deactivated successfully.
Apr 30 00:38:23.479568 systemd-logind[1748]: Removed session 9.
Apr 30 00:38:31.907563 kubelet[3367]: I0430 00:38:31.907522 3367 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 30 00:38:31.908707 containerd[1816]: time="2025-04-30T00:38:31.907845210Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 30 00:38:31.910715 kubelet[3367]: I0430 00:38:31.910480 3367 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 30 00:38:32.408594 kubelet[3367]: I0430 00:38:32.408519 3367 topology_manager.go:215] "Topology Admit Handler" podUID="88073e1f-8408-4f5b-a5c6-2915c6c192c1" podNamespace="kube-system" podName="kube-proxy-pz96h"
Apr 30 00:38:32.461520 kubelet[3367]: I0430 00:38:32.461455 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/88073e1f-8408-4f5b-a5c6-2915c6c192c1-kube-proxy\") pod \"kube-proxy-pz96h\" (UID: \"88073e1f-8408-4f5b-a5c6-2915c6c192c1\") " pod="kube-system/kube-proxy-pz96h"
Apr 30 00:38:32.461794 kubelet[3367]: I0430 00:38:32.461599 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/88073e1f-8408-4f5b-a5c6-2915c6c192c1-xtables-lock\") pod \"kube-proxy-pz96h\" (UID: \"88073e1f-8408-4f5b-a5c6-2915c6c192c1\") " pod="kube-system/kube-proxy-pz96h"
Apr 30 00:38:32.461794 kubelet[3367]: I0430 00:38:32.461624 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/88073e1f-8408-4f5b-a5c6-2915c6c192c1-lib-modules\") pod \"kube-proxy-pz96h\" (UID: \"88073e1f-8408-4f5b-a5c6-2915c6c192c1\") " pod="kube-system/kube-proxy-pz96h"
Apr 30 00:38:32.461794 kubelet[3367]: I0430 00:38:32.461656 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5vh7\" (UniqueName: \"kubernetes.io/projected/88073e1f-8408-4f5b-a5c6-2915c6c192c1-kube-api-access-b5vh7\") pod \"kube-proxy-pz96h\" (UID: \"88073e1f-8408-4f5b-a5c6-2915c6c192c1\") " pod="kube-system/kube-proxy-pz96h"
Apr 30 00:38:32.569839 kubelet[3367]: E0430 00:38:32.569805 3367 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Apr 30 00:38:32.569839 kubelet[3367]: E0430 00:38:32.569832 3367 projected.go:200] Error preparing data for projected volume kube-api-access-b5vh7 for pod kube-system/kube-proxy-pz96h: configmap "kube-root-ca.crt" not found
Apr 30 00:38:32.570012 kubelet[3367]: E0430 00:38:32.569915 3367 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/88073e1f-8408-4f5b-a5c6-2915c6c192c1-kube-api-access-b5vh7 podName:88073e1f-8408-4f5b-a5c6-2915c6c192c1 nodeName:}" failed. No retries permitted until 2025-04-30 00:38:33.069897063 +0000 UTC m=+14.581170060 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-b5vh7" (UniqueName: "kubernetes.io/projected/88073e1f-8408-4f5b-a5c6-2915c6c192c1-kube-api-access-b5vh7") pod "kube-proxy-pz96h" (UID: "88073e1f-8408-4f5b-a5c6-2915c6c192c1") : configmap "kube-root-ca.crt" not found
Apr 30 00:38:32.988159 kubelet[3367]: I0430 00:38:32.988109 3367 topology_manager.go:215] "Topology Admit Handler" podUID="060cf5db-8768-448b-9ffa-1441da62279b" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-tlc6w"
Apr 30 00:38:33.064974 kubelet[3367]: I0430 00:38:33.064870 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft92r\" (UniqueName: \"kubernetes.io/projected/060cf5db-8768-448b-9ffa-1441da62279b-kube-api-access-ft92r\") pod \"tigera-operator-797db67f8-tlc6w\" (UID: \"060cf5db-8768-448b-9ffa-1441da62279b\") " pod="tigera-operator/tigera-operator-797db67f8-tlc6w"
Apr 30 00:38:33.064974 kubelet[3367]: I0430 00:38:33.064911 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/060cf5db-8768-448b-9ffa-1441da62279b-var-lib-calico\") pod \"tigera-operator-797db67f8-tlc6w\" (UID: \"060cf5db-8768-448b-9ffa-1441da62279b\") " pod="tigera-operator/tigera-operator-797db67f8-tlc6w"
Apr 30 00:38:33.294943 containerd[1816]: time="2025-04-30T00:38:33.294810559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-tlc6w,Uid:060cf5db-8768-448b-9ffa-1441da62279b,Namespace:tigera-operator,Attempt:0,}"
Apr 30 00:38:33.312460 containerd[1816]: time="2025-04-30T00:38:33.312426264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pz96h,Uid:88073e1f-8408-4f5b-a5c6-2915c6c192c1,Namespace:kube-system,Attempt:0,}"
Apr 30 00:38:33.371504 containerd[1816]: time="2025-04-30T00:38:33.371424319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 00:38:33.371504 containerd[1816]: time="2025-04-30T00:38:33.371474479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 00:38:33.371662 containerd[1816]: time="2025-04-30T00:38:33.371489999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:38:33.371926 containerd[1816]: time="2025-04-30T00:38:33.371794918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:38:33.394911 containerd[1816]: time="2025-04-30T00:38:33.394840246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 00:38:33.395103 containerd[1816]: time="2025-04-30T00:38:33.394890126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 00:38:33.395103 containerd[1816]: time="2025-04-30T00:38:33.394911846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:38:33.395103 containerd[1816]: time="2025-04-30T00:38:33.394995366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:38:33.420942 containerd[1816]: time="2025-04-30T00:38:33.420870965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-tlc6w,Uid:060cf5db-8768-448b-9ffa-1441da62279b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a27a59e8402f06b103aced616cd1d83ab0d010fd73ca70add1ee65d9b3686729\""
Apr 30 00:38:33.426077 containerd[1816]: time="2025-04-30T00:38:33.425964309Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\""
Apr 30 00:38:33.432101 containerd[1816]: time="2025-04-30T00:38:33.432030490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pz96h,Uid:88073e1f-8408-4f5b-a5c6-2915c6c192c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"784a596f3ab7bfbd3d62d27e060fe736a78669c5bc8511eb14927951f6405092\""
Apr 30 00:38:33.434830 containerd[1816]: time="2025-04-30T00:38:33.434719082Z" level=info msg="CreateContainer within sandbox \"784a596f3ab7bfbd3d62d27e060fe736a78669c5bc8511eb14927951f6405092\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 30 00:38:33.504642 containerd[1816]: time="2025-04-30T00:38:33.504565663Z" level=info msg="CreateContainer within sandbox \"784a596f3ab7bfbd3d62d27e060fe736a78669c5bc8511eb14927951f6405092\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"84de684947b373e807541fd31c9d1a50559da2e8914043c262213f14f26b03d5\""
Apr 30 00:38:33.505391 containerd[1816]: time="2025-04-30T00:38:33.505043582Z" level=info msg="StartContainer for \"84de684947b373e807541fd31c9d1a50559da2e8914043c262213f14f26b03d5\""
Apr 30 00:38:33.555992 containerd[1816]: time="2025-04-30T00:38:33.555756144Z" level=info msg="StartContainer for \"84de684947b373e807541fd31c9d1a50559da2e8914043c262213f14f26b03d5\" returns successfully"
Apr 30 00:38:33.689385 kubelet[3367]: I0430 00:38:33.689123 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pz96h" podStartSLOduration=1.689105847 podStartE2EDuration="1.689105847s" podCreationTimestamp="2025-04-30 00:38:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:38:33.689083927 +0000 UTC m=+15.200356924" watchObservedRunningTime="2025-04-30 00:38:33.689105847 +0000 UTC m=+15.200378844"
Apr 30 00:38:35.592906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3938818215.mount: Deactivated successfully.
Apr 30 00:38:40.705570 containerd[1816]: time="2025-04-30T00:38:40.705524206Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:38:40.709206 containerd[1816]: time="2025-04-30T00:38:40.709174115Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084"
Apr 30 00:38:40.712940 containerd[1816]: time="2025-04-30T00:38:40.712886704Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:38:40.722013 containerd[1816]: time="2025-04-30T00:38:40.721981076Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:38:40.723093 containerd[1816]: time="2025-04-30T00:38:40.722487514Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 7.296044847s"
Apr 30 00:38:40.723093 containerd[1816]: time="2025-04-30T00:38:40.722517954Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\""
Apr 30 00:38:40.725182 containerd[1816]: time="2025-04-30T00:38:40.725087706Z" level=info msg="CreateContainer within sandbox \"a27a59e8402f06b103aced616cd1d83ab0d010fd73ca70add1ee65d9b3686729\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 30 00:38:40.767624 containerd[1816]: time="2025-04-30T00:38:40.767581256Z" level=info msg="CreateContainer within sandbox \"a27a59e8402f06b103aced616cd1d83ab0d010fd73ca70add1ee65d9b3686729\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b873552339b1ec44ff2af5e5cc66dc9c71861264ab5d906f6f7537ae7a871d47\""
Apr 30 00:38:40.768383 containerd[1816]: time="2025-04-30T00:38:40.768170894Z" level=info msg="StartContainer for \"b873552339b1ec44ff2af5e5cc66dc9c71861264ab5d906f6f7537ae7a871d47\""
Apr 30 00:38:40.820088 containerd[1816]: time="2025-04-30T00:38:40.819973135Z" level=info msg="StartContainer for \"b873552339b1ec44ff2af5e5cc66dc9c71861264ab5d906f6f7537ae7a871d47\" returns successfully"
Apr 30 00:38:41.702634 kubelet[3367]: I0430 00:38:41.702149 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-tlc6w" podStartSLOduration=2.402868351 podStartE2EDuration="9.702134628s" podCreationTimestamp="2025-04-30 00:38:32 +0000 UTC" firstStartedPulling="2025-04-30 00:38:33.423873875 +0000 UTC m=+14.935146832" lastFinishedPulling="2025-04-30 00:38:40.723140152 +0000 UTC m=+22.234413109" observedRunningTime="2025-04-30 00:38:41.701793669 +0000 UTC m=+23.213066666" watchObservedRunningTime="2025-04-30 00:38:41.702134628 +0000 UTC m=+23.213407665"
Apr 30 00:38:44.866317 kubelet[3367]: I0430 00:38:44.866267 3367 topology_manager.go:215] "Topology Admit Handler" podUID="e7fc6476-2f37-469e-993e-94b38b9dd4ef" podNamespace="calico-system" podName="calico-typha-84846cfc59-27fhj"
Apr 30 00:38:44.931020 kubelet[3367]: I0430 00:38:44.930971 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e7fc6476-2f37-469e-993e-94b38b9dd4ef-typha-certs\") pod \"calico-typha-84846cfc59-27fhj\" (UID: \"e7fc6476-2f37-469e-993e-94b38b9dd4ef\") " pod="calico-system/calico-typha-84846cfc59-27fhj"
Apr 30 00:38:44.931156 kubelet[3367]: I0430 00:38:44.931089 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7fc6476-2f37-469e-993e-94b38b9dd4ef-tigera-ca-bundle\") pod \"calico-typha-84846cfc59-27fhj\" (UID: \"e7fc6476-2f37-469e-993e-94b38b9dd4ef\") " pod="calico-system/calico-typha-84846cfc59-27fhj"
Apr 30 00:38:44.931156 kubelet[3367]: I0430 00:38:44.931115 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xljj2\" (UniqueName: \"kubernetes.io/projected/e7fc6476-2f37-469e-993e-94b38b9dd4ef-kube-api-access-xljj2\") pod \"calico-typha-84846cfc59-27fhj\" (UID: \"e7fc6476-2f37-469e-993e-94b38b9dd4ef\") " pod="calico-system/calico-typha-84846cfc59-27fhj"
Apr 30 00:38:44.992829 kubelet[3367]: I0430 00:38:44.992770 3367 topology_manager.go:215] "Topology Admit Handler" podUID="facb84c5-8830-48c5-8675-03a9219d0eb7" podNamespace="calico-system" podName="calico-node-v8xnm"
Apr 30 00:38:45.031469 kubelet[3367]: I0430 00:38:45.031435 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/facb84c5-8830-48c5-8675-03a9219d0eb7-var-run-calico\") pod \"calico-node-v8xnm\" (UID: \"facb84c5-8830-48c5-8675-03a9219d0eb7\") " pod="calico-system/calico-node-v8xnm"
Apr 30 00:38:45.031469 kubelet[3367]: I0430 00:38:45.031470 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/facb84c5-8830-48c5-8675-03a9219d0eb7-cni-bin-dir\") pod \"calico-node-v8xnm\" (UID: \"facb84c5-8830-48c5-8675-03a9219d0eb7\") " pod="calico-system/calico-node-v8xnm"
Apr 30 00:38:45.031620 kubelet[3367]: I0430 00:38:45.031490 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/facb84c5-8830-48c5-8675-03a9219d0eb7-lib-modules\") pod \"calico-node-v8xnm\" (UID: \"facb84c5-8830-48c5-8675-03a9219d0eb7\") " pod="calico-system/calico-node-v8xnm"
Apr 30 00:38:45.031620 kubelet[3367]: I0430 00:38:45.031508 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/facb84c5-8830-48c5-8675-03a9219d0eb7-var-lib-calico\") pod \"calico-node-v8xnm\" (UID: \"facb84c5-8830-48c5-8675-03a9219d0eb7\") " pod="calico-system/calico-node-v8xnm"
Apr 30 00:38:45.031620 kubelet[3367]: I0430 00:38:45.031555 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/facb84c5-8830-48c5-8675-03a9219d0eb7-xtables-lock\") pod \"calico-node-v8xnm\" (UID: \"facb84c5-8830-48c5-8675-03a9219d0eb7\") " pod="calico-system/calico-node-v8xnm"
Apr 30 00:38:45.031620 kubelet[3367]: I0430 00:38:45.031570 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/facb84c5-8830-48c5-8675-03a9219d0eb7-tigera-ca-bundle\") pod \"calico-node-v8xnm\" (UID: \"facb84c5-8830-48c5-8675-03a9219d0eb7\") " pod="calico-system/calico-node-v8xnm"
Apr 30 00:38:45.031620 kubelet[3367]: I0430 00:38:45.031584 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/facb84c5-8830-48c5-8675-03a9219d0eb7-node-certs\") pod \"calico-node-v8xnm\" (UID: \"facb84c5-8830-48c5-8675-03a9219d0eb7\") " pod="calico-system/calico-node-v8xnm"
Apr 30 00:38:45.031736 kubelet[3367]: I0430 00:38:45.031598 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/facb84c5-8830-48c5-8675-03a9219d0eb7-cni-net-dir\") pod \"calico-node-v8xnm\" (UID: \"facb84c5-8830-48c5-8675-03a9219d0eb7\") " pod="calico-system/calico-node-v8xnm"
Apr 30 00:38:45.031736 kubelet[3367]: I0430 00:38:45.031614 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/facb84c5-8830-48c5-8675-03a9219d0eb7-policysync\") pod \"calico-node-v8xnm\" (UID: \"facb84c5-8830-48c5-8675-03a9219d0eb7\") " pod="calico-system/calico-node-v8xnm"
Apr 30 00:38:45.031736 kubelet[3367]: I0430 00:38:45.031630 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsg5f\" (UniqueName: \"kubernetes.io/projected/facb84c5-8830-48c5-8675-03a9219d0eb7-kube-api-access-nsg5f\") pod \"calico-node-v8xnm\" (UID: \"facb84c5-8830-48c5-8675-03a9219d0eb7\") " pod="calico-system/calico-node-v8xnm"
Apr 30 00:38:45.031736 kubelet[3367]: I0430 00:38:45.031657 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/facb84c5-8830-48c5-8675-03a9219d0eb7-cni-log-dir\") pod \"calico-node-v8xnm\" (UID: \"facb84c5-8830-48c5-8675-03a9219d0eb7\") " pod="calico-system/calico-node-v8xnm"
Apr 30 00:38:45.031736 kubelet[3367]: I0430 00:38:45.031672 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/facb84c5-8830-48c5-8675-03a9219d0eb7-flexvol-driver-host\") pod \"calico-node-v8xnm\" (UID: \"facb84c5-8830-48c5-8675-03a9219d0eb7\") " pod="calico-system/calico-node-v8xnm"
Apr 30 00:38:45.124191 kubelet[3367]: I0430 00:38:45.124018 3367 topology_manager.go:215] "Topology Admit Handler" podUID="eb05636f-1a4f-4911-8c79-8727fa131132" podNamespace="calico-system" podName="csi-node-driver-tttw8"
Apr 30 00:38:45.126341 kubelet[3367]: E0430 00:38:45.124950 3367 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tttw8" podUID="eb05636f-1a4f-4911-8c79-8727fa131132"
Apr 30 00:38:45.142400 kubelet[3367]: E0430 00:38:45.140836 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:45.142400 kubelet[3367]: W0430 00:38:45.140859 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:45.142400 kubelet[3367]: E0430 00:38:45.140874 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:45.143890 kubelet[3367]: E0430 00:38:45.143828 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:45.143890 kubelet[3367]: W0430 00:38:45.143844 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:45.143890 kubelet[3367]: E0430 00:38:45.143855 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:45.156875 kubelet[3367]: E0430 00:38:45.156859 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:45.156997 kubelet[3367]: W0430 00:38:45.156956 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:45.156997 kubelet[3367]: E0430 00:38:45.156973 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Apr 30 00:38:45.175060 containerd[1816]: time="2025-04-30T00:38:45.175027947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84846cfc59-27fhj,Uid:e7fc6476-2f37-469e-993e-94b38b9dd4ef,Namespace:calico-system,Attempt:0,}" Apr 30 00:38:45.217708 kubelet[3367]: E0430 00:38:45.217457 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.217708 kubelet[3367]: W0430 00:38:45.217703 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.218019 kubelet[3367]: E0430 00:38:45.217727 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:45.220118 kubelet[3367]: E0430 00:38:45.219638 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.220118 kubelet[3367]: W0430 00:38:45.219656 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.220118 kubelet[3367]: E0430 00:38:45.219668 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:45.221499 kubelet[3367]: E0430 00:38:45.221478 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.221499 kubelet[3367]: W0430 00:38:45.221494 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.221590 kubelet[3367]: E0430 00:38:45.221506 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:45.221973 kubelet[3367]: E0430 00:38:45.221931 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.221973 kubelet[3367]: W0430 00:38:45.221948 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.222066 kubelet[3367]: E0430 00:38:45.221960 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:45.222471 kubelet[3367]: E0430 00:38:45.222453 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.222471 kubelet[3367]: W0430 00:38:45.222468 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.222544 kubelet[3367]: E0430 00:38:45.222479 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:45.222671 kubelet[3367]: E0430 00:38:45.222654 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.222671 kubelet[3367]: W0430 00:38:45.222665 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.222731 kubelet[3367]: E0430 00:38:45.222673 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:45.222849 kubelet[3367]: E0430 00:38:45.222834 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.222849 kubelet[3367]: W0430 00:38:45.222844 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.222904 kubelet[3367]: E0430 00:38:45.222852 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:45.223031 kubelet[3367]: E0430 00:38:45.223014 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.223031 kubelet[3367]: W0430 00:38:45.223026 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.223090 kubelet[3367]: E0430 00:38:45.223034 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:45.223229 kubelet[3367]: E0430 00:38:45.223214 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.223229 kubelet[3367]: W0430 00:38:45.223226 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.223282 kubelet[3367]: E0430 00:38:45.223234 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:45.223437 kubelet[3367]: E0430 00:38:45.223421 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.223437 kubelet[3367]: W0430 00:38:45.223434 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.223499 kubelet[3367]: E0430 00:38:45.223443 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:45.223634 kubelet[3367]: E0430 00:38:45.223617 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.223664 kubelet[3367]: W0430 00:38:45.223633 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.223664 kubelet[3367]: E0430 00:38:45.223649 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:45.223813 kubelet[3367]: E0430 00:38:45.223797 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.223813 kubelet[3367]: W0430 00:38:45.223809 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.223872 kubelet[3367]: E0430 00:38:45.223816 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:45.227370 kubelet[3367]: E0430 00:38:45.224435 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.227370 kubelet[3367]: W0430 00:38:45.224463 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.227370 kubelet[3367]: E0430 00:38:45.224475 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:45.238376 kubelet[3367]: E0430 00:38:45.237574 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.238462 kubelet[3367]: W0430 00:38:45.238379 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.238462 kubelet[3367]: E0430 00:38:45.238401 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:45.238639 kubelet[3367]: E0430 00:38:45.238619 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.238639 kubelet[3367]: W0430 00:38:45.238634 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.238716 kubelet[3367]: E0430 00:38:45.238644 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:45.239691 kubelet[3367]: E0430 00:38:45.239567 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.239691 kubelet[3367]: W0430 00:38:45.239586 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.239691 kubelet[3367]: E0430 00:38:45.239599 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:45.240550 kubelet[3367]: E0430 00:38:45.240527 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.240733 kubelet[3367]: W0430 00:38:45.240693 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.240733 kubelet[3367]: E0430 00:38:45.240713 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:45.241770 kubelet[3367]: E0430 00:38:45.241740 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.241770 kubelet[3367]: W0430 00:38:45.241761 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.241770 kubelet[3367]: E0430 00:38:45.241772 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:45.242659 kubelet[3367]: E0430 00:38:45.242637 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.242659 kubelet[3367]: W0430 00:38:45.242655 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.242997 kubelet[3367]: E0430 00:38:45.242669 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:45.243615 kubelet[3367]: E0430 00:38:45.243586 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.243615 kubelet[3367]: W0430 00:38:45.243602 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.243615 kubelet[3367]: E0430 00:38:45.243614 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:45.246465 kubelet[3367]: E0430 00:38:45.246435 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.246465 kubelet[3367]: W0430 00:38:45.246456 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.246465 kubelet[3367]: E0430 00:38:45.246468 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:45.246860 kubelet[3367]: I0430 00:38:45.246495 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb05636f-1a4f-4911-8c79-8727fa131132-kubelet-dir\") pod \"csi-node-driver-tttw8\" (UID: \"eb05636f-1a4f-4911-8c79-8727fa131132\") " pod="calico-system/csi-node-driver-tttw8" Apr 30 00:38:45.246860 kubelet[3367]: E0430 00:38:45.246667 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.246860 kubelet[3367]: W0430 00:38:45.246678 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.246860 kubelet[3367]: E0430 00:38:45.246687 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:45.246860 kubelet[3367]: I0430 00:38:45.246701 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/eb05636f-1a4f-4911-8c79-8727fa131132-registration-dir\") pod \"csi-node-driver-tttw8\" (UID: \"eb05636f-1a4f-4911-8c79-8727fa131132\") " pod="calico-system/csi-node-driver-tttw8" Apr 30 00:38:45.248462 kubelet[3367]: E0430 00:38:45.248438 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.248462 kubelet[3367]: W0430 00:38:45.248457 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.248558 kubelet[3367]: E0430 00:38:45.248478 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:45.248558 kubelet[3367]: I0430 00:38:45.248496 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k22cq\" (UniqueName: \"kubernetes.io/projected/eb05636f-1a4f-4911-8c79-8727fa131132-kube-api-access-k22cq\") pod \"csi-node-driver-tttw8\" (UID: \"eb05636f-1a4f-4911-8c79-8727fa131132\") " pod="calico-system/csi-node-driver-tttw8" Apr 30 00:38:45.249105 kubelet[3367]: E0430 00:38:45.249088 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.249154 kubelet[3367]: W0430 00:38:45.249105 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.250416 kubelet[3367]: E0430 00:38:45.250389 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:45.250483 kubelet[3367]: I0430 00:38:45.250426 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/eb05636f-1a4f-4911-8c79-8727fa131132-socket-dir\") pod \"csi-node-driver-tttw8\" (UID: \"eb05636f-1a4f-4911-8c79-8727fa131132\") " pod="calico-system/csi-node-driver-tttw8" Apr 30 00:38:45.250511 kubelet[3367]: E0430 00:38:45.250496 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.250511 kubelet[3367]: W0430 00:38:45.250503 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.251385 kubelet[3367]: E0430 00:38:45.251208 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:45.251447 containerd[1816]: time="2025-04-30T00:38:45.251222273Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:38:45.251447 containerd[1816]: time="2025-04-30T00:38:45.251268113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:38:45.251447 containerd[1816]: time="2025-04-30T00:38:45.251279553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:38:45.253102 kubelet[3367]: E0430 00:38:45.253045 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.253102 kubelet[3367]: W0430 00:38:45.253062 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.254430 kubelet[3367]: E0430 00:38:45.254393 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:45.254607 kubelet[3367]: E0430 00:38:45.254587 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.254607 kubelet[3367]: W0430 00:38:45.254601 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.254860 containerd[1816]: time="2025-04-30T00:38:45.254776063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:38:45.255549 kubelet[3367]: E0430 00:38:45.255522 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:45.255678 kubelet[3367]: E0430 00:38:45.255661 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.255678 kubelet[3367]: W0430 00:38:45.255674 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.255922 kubelet[3367]: E0430 00:38:45.255702 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:45.255922 kubelet[3367]: I0430 00:38:45.255729 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/eb05636f-1a4f-4911-8c79-8727fa131132-varrun\") pod \"csi-node-driver-tttw8\" (UID: \"eb05636f-1a4f-4911-8c79-8727fa131132\") " pod="calico-system/csi-node-driver-tttw8" Apr 30 00:38:45.256607 kubelet[3367]: E0430 00:38:45.256514 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.256607 kubelet[3367]: W0430 00:38:45.256539 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.256607 kubelet[3367]: E0430 00:38:45.256574 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:45.258257 kubelet[3367]: E0430 00:38:45.258164 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.258257 kubelet[3367]: W0430 00:38:45.258215 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.258257 kubelet[3367]: E0430 00:38:45.258228 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:45.260454 kubelet[3367]: E0430 00:38:45.260431 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.260454 kubelet[3367]: W0430 00:38:45.260448 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.262321 kubelet[3367]: E0430 00:38:45.260476 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:45.262321 kubelet[3367]: E0430 00:38:45.261056 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.262321 kubelet[3367]: W0430 00:38:45.261070 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.262321 kubelet[3367]: E0430 00:38:45.261081 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:45.262602 kubelet[3367]: E0430 00:38:45.262584 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.262602 kubelet[3367]: W0430 00:38:45.262599 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.262683 kubelet[3367]: E0430 00:38:45.262610 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:45.262821 kubelet[3367]: E0430 00:38:45.262803 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.262821 kubelet[3367]: W0430 00:38:45.262816 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.262980 kubelet[3367]: E0430 00:38:45.262827 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:45.263432 kubelet[3367]: E0430 00:38:45.263411 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.263432 kubelet[3367]: W0430 00:38:45.263427 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.263539 kubelet[3367]: E0430 00:38:45.263437 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:45.300646 containerd[1816]: time="2025-04-30T00:38:45.300320763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v8xnm,Uid:facb84c5-8830-48c5-8675-03a9219d0eb7,Namespace:calico-system,Attempt:0,}" Apr 30 00:38:45.307559 containerd[1816]: time="2025-04-30T00:38:45.307531581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84846cfc59-27fhj,Uid:e7fc6476-2f37-469e-993e-94b38b9dd4ef,Namespace:calico-system,Attempt:0,} returns sandbox id \"ffd8812d5db16ab434d511509365c6cde8b62e720381f4cef187b68490816470\"" Apr 30 00:38:45.310127 containerd[1816]: time="2025-04-30T00:38:45.310017773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" Apr 30 00:38:45.347298 containerd[1816]: time="2025-04-30T00:38:45.346857741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:38:45.347298 containerd[1816]: time="2025-04-30T00:38:45.346930820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:38:45.347298 containerd[1816]: time="2025-04-30T00:38:45.346952900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:38:45.347692 containerd[1816]: time="2025-04-30T00:38:45.347238020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:38:45.359005 kubelet[3367]: E0430 00:38:45.358867 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.359005 kubelet[3367]: W0430 00:38:45.358892 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.359005 kubelet[3367]: E0430 00:38:45.358913 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:45.359405 kubelet[3367]: E0430 00:38:45.359265 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.359405 kubelet[3367]: W0430 00:38:45.359278 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.359405 kubelet[3367]: E0430 00:38:45.359289 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:45.359810 kubelet[3367]: E0430 00:38:45.359709 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.359810 kubelet[3367]: W0430 00:38:45.359721 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.360060 kubelet[3367]: E0430 00:38:45.359741 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:45.360060 kubelet[3367]: E0430 00:38:45.360016 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.360135 kubelet[3367]: W0430 00:38:45.360067 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.360135 kubelet[3367]: E0430 00:38:45.360087 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:45.360397 kubelet[3367]: E0430 00:38:45.360380 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.360397 kubelet[3367]: W0430 00:38:45.360393 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.360585 kubelet[3367]: E0430 00:38:45.360409 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:45.361182 kubelet[3367]: E0430 00:38:45.361067 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.361182 kubelet[3367]: W0430 00:38:45.361091 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.361182 kubelet[3367]: E0430 00:38:45.361127 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:45.363517 kubelet[3367]: E0430 00:38:45.363315 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.363517 kubelet[3367]: W0430 00:38:45.363330 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.363956 kubelet[3367]: E0430 00:38:45.363756 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:45.364262 kubelet[3367]: E0430 00:38:45.364062 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.364262 kubelet[3367]: W0430 00:38:45.364074 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.365710 kubelet[3367]: E0430 00:38:45.365441 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.365710 kubelet[3367]: W0430 00:38:45.365455 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.366002 kubelet[3367]: E0430 00:38:45.365840 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:45.366002 kubelet[3367]: E0430 00:38:45.365899 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:45.367587 kubelet[3367]: E0430 00:38:45.367409 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.367587 kubelet[3367]: W0430 00:38:45.367423 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.367587 kubelet[3367]: E0430 00:38:45.367440 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:45.367796 kubelet[3367]: E0430 00:38:45.367760 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.367796 kubelet[3367]: W0430 00:38:45.367772 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.368598 kubelet[3367]: E0430 00:38:45.368474 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:45.369057 kubelet[3367]: E0430 00:38:45.368927 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.369057 kubelet[3367]: W0430 00:38:45.368939 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.369272 kubelet[3367]: E0430 00:38:45.369157 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:45.370683 kubelet[3367]: E0430 00:38:45.370613 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.370683 kubelet[3367]: W0430 00:38:45.370627 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.370872 kubelet[3367]: E0430 00:38:45.370796 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:45.371056 kubelet[3367]: E0430 00:38:45.371044 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.371199 kubelet[3367]: W0430 00:38:45.371121 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.371199 kubelet[3367]: E0430 00:38:45.371172 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:45.372529 kubelet[3367]: E0430 00:38:45.372468 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.372529 kubelet[3367]: W0430 00:38:45.372483 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.372734 kubelet[3367]: E0430 00:38:45.372705 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:45.372941 kubelet[3367]: E0430 00:38:45.372804 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.372941 kubelet[3367]: W0430 00:38:45.372813 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.373025 kubelet[3367]: E0430 00:38:45.373012 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:45.373624 kubelet[3367]: E0430 00:38:45.373481 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.373624 kubelet[3367]: W0430 00:38:45.373495 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.373897 kubelet[3367]: E0430 00:38:45.373796 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:45.375484 kubelet[3367]: E0430 00:38:45.374838 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.375484 kubelet[3367]: W0430 00:38:45.374851 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.376157 kubelet[3367]: E0430 00:38:45.375835 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:45.378198 kubelet[3367]: E0430 00:38:45.378067 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.378198 kubelet[3367]: W0430 00:38:45.378082 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.378546 kubelet[3367]: E0430 00:38:45.378323 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:45.381666 kubelet[3367]: E0430 00:38:45.381410 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.381666 kubelet[3367]: W0430 00:38:45.381426 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.382092 kubelet[3367]: E0430 00:38:45.382075 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:45.384153 kubelet[3367]: E0430 00:38:45.382665 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.384153 kubelet[3367]: W0430 00:38:45.382681 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.385383 kubelet[3367]: E0430 00:38:45.384598 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:45.385638 kubelet[3367]: E0430 00:38:45.385625 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.386852 kubelet[3367]: W0430 00:38:45.386830 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.387370 kubelet[3367]: E0430 00:38:45.387346 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.387465 kubelet[3367]: W0430 00:38:45.387453 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.387988 kubelet[3367]: E0430 00:38:45.387974 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.388065 kubelet[3367]: W0430 00:38:45.388055 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.388699 kubelet[3367]: E0430 00:38:45.388257 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:45.388699 kubelet[3367]: E0430 00:38:45.388290 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:45.388699 kubelet[3367]: E0430 00:38:45.388300 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:45.389323 kubelet[3367]: E0430 00:38:45.389308 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.389461 kubelet[3367]: W0430 00:38:45.389408 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.389461 kubelet[3367]: E0430 00:38:45.389429 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:45.403477 kubelet[3367]: E0430 00:38:45.403456 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:45.403664 kubelet[3367]: W0430 00:38:45.403580 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:45.403664 kubelet[3367]: E0430 00:38:45.403603 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:45.406530 containerd[1816]: time="2025-04-30T00:38:45.406418958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v8xnm,Uid:facb84c5-8830-48c5-8675-03a9219d0eb7,Namespace:calico-system,Attempt:0,} returns sandbox id \"fad51d16b15d6448e7bbc0492bbd9a84fd0be32d7572135f482648584d03bb3b\"" Apr 30 00:38:46.578223 kubelet[3367]: E0430 00:38:46.578155 3367 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tttw8" podUID="eb05636f-1a4f-4911-8c79-8727fa131132" Apr 30 00:38:46.951516 containerd[1816]: time="2025-04-30T00:38:46.951408628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:46.955578 containerd[1816]: time="2025-04-30T00:38:46.955517336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" Apr 30 00:38:46.960754 containerd[1816]: time="2025-04-30T00:38:46.960706760Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:46.966063 containerd[1816]: time="2025-04-30T00:38:46.966037023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:46.968130 containerd[1816]: time="2025-04-30T00:38:46.966875781Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest 
\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 1.656811248s" Apr 30 00:38:46.968130 containerd[1816]: time="2025-04-30T00:38:46.966908581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" Apr 30 00:38:46.972393 containerd[1816]: time="2025-04-30T00:38:46.970490490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" Apr 30 00:38:46.979805 containerd[1816]: time="2025-04-30T00:38:46.979770981Z" level=info msg="CreateContainer within sandbox \"ffd8812d5db16ab434d511509365c6cde8b62e720381f4cef187b68490816470\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 30 00:38:47.048311 containerd[1816]: time="2025-04-30T00:38:47.048276692Z" level=info msg="CreateContainer within sandbox \"ffd8812d5db16ab434d511509365c6cde8b62e720381f4cef187b68490816470\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1f5a83f9ed857bfd4b0ee890e83f9bffb4b05806ec69f44babc20c82707b1644\"" Apr 30 00:38:47.049498 containerd[1816]: time="2025-04-30T00:38:47.048920090Z" level=info msg="StartContainer for \"1f5a83f9ed857bfd4b0ee890e83f9bffb4b05806ec69f44babc20c82707b1644\"" Apr 30 00:38:47.112156 containerd[1816]: time="2025-04-30T00:38:47.112033456Z" level=info msg="StartContainer for \"1f5a83f9ed857bfd4b0ee890e83f9bffb4b05806ec69f44babc20c82707b1644\" returns successfully" Apr 30 00:38:47.717682 kubelet[3367]: I0430 00:38:47.717091 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-84846cfc59-27fhj" podStartSLOduration=2.058435682 podStartE2EDuration="3.717077284s" podCreationTimestamp="2025-04-30 00:38:44 +0000 UTC" firstStartedPulling="2025-04-30 00:38:45.308957177 +0000 UTC m=+26.820230134" lastFinishedPulling="2025-04-30 00:38:46.967598739 +0000 UTC m=+28.478871736" 
observedRunningTime="2025-04-30 00:38:47.716334046 +0000 UTC m=+29.227607043" watchObservedRunningTime="2025-04-30 00:38:47.717077284 +0000 UTC m=+29.228350241" Apr 30 00:38:47.764524 kubelet[3367]: E0430 00:38:47.764354 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:47.764524 kubelet[3367]: W0430 00:38:47.764409 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:47.764524 kubelet[3367]: E0430 00:38:47.764428 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:47.765164 kubelet[3367]: E0430 00:38:47.764610 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:47.765164 kubelet[3367]: W0430 00:38:47.764618 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:47.765164 kubelet[3367]: E0430 00:38:47.764635 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:47.765164 kubelet[3367]: E0430 00:38:47.764815 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:47.765164 kubelet[3367]: W0430 00:38:47.764824 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:47.765164 kubelet[3367]: E0430 00:38:47.764834 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:47.765164 kubelet[3367]: E0430 00:38:47.765007 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:47.765164 kubelet[3367]: W0430 00:38:47.765016 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:47.765164 kubelet[3367]: E0430 00:38:47.765026 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:47.766939 kubelet[3367]: E0430 00:38:47.766474 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:47.766939 kubelet[3367]: W0430 00:38:47.766486 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:47.766939 kubelet[3367]: E0430 00:38:47.766497 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:47.767824 kubelet[3367]: E0430 00:38:47.767706 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:47.767824 kubelet[3367]: W0430 00:38:47.767719 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:47.767824 kubelet[3367]: E0430 00:38:47.767730 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:47.768479 kubelet[3367]: E0430 00:38:47.768151 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:47.768479 kubelet[3367]: W0430 00:38:47.768164 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:47.768479 kubelet[3367]: E0430 00:38:47.768175 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:47.768893 kubelet[3367]: E0430 00:38:47.768789 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:47.768893 kubelet[3367]: W0430 00:38:47.768800 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:47.768893 kubelet[3367]: E0430 00:38:47.768811 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:47.769206 kubelet[3367]: E0430 00:38:47.769098 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:47.769206 kubelet[3367]: W0430 00:38:47.769109 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:47.769206 kubelet[3367]: E0430 00:38:47.769119 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:47.769430 kubelet[3367]: E0430 00:38:47.769351 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:47.769430 kubelet[3367]: W0430 00:38:47.769386 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:47.769430 kubelet[3367]: E0430 00:38:47.769398 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:47.769737 kubelet[3367]: E0430 00:38:47.769671 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:47.769737 kubelet[3367]: W0430 00:38:47.769688 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:47.769737 kubelet[3367]: E0430 00:38:47.769698 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:47.770042 kubelet[3367]: E0430 00:38:47.769953 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:47.770042 kubelet[3367]: W0430 00:38:47.769965 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:47.770042 kubelet[3367]: E0430 00:38:47.769974 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:47.770341 kubelet[3367]: E0430 00:38:47.770274 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:47.770341 kubelet[3367]: W0430 00:38:47.770291 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:47.770341 kubelet[3367]: E0430 00:38:47.770302 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:47.770677 kubelet[3367]: E0430 00:38:47.770620 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:47.770677 kubelet[3367]: W0430 00:38:47.770631 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:47.770677 kubelet[3367]: E0430 00:38:47.770642 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:47.770987 kubelet[3367]: E0430 00:38:47.770916 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:47.770987 kubelet[3367]: W0430 00:38:47.770926 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:47.770987 kubelet[3367]: E0430 00:38:47.770943 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:47.784403 kubelet[3367]: E0430 00:38:47.784317 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:47.784403 kubelet[3367]: W0430 00:38:47.784331 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:47.784403 kubelet[3367]: E0430 00:38:47.784355 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:47.784739 kubelet[3367]: E0430 00:38:47.784579 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:47.784739 kubelet[3367]: W0430 00:38:47.784589 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:47.784739 kubelet[3367]: E0430 00:38:47.784598 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:47.784739 kubelet[3367]: E0430 00:38:47.784740 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:47.784739 kubelet[3367]: W0430 00:38:47.784747 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:47.784739 kubelet[3367]: E0430 00:38:47.784756 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:47.785085 kubelet[3367]: E0430 00:38:47.785067 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:47.785144 kubelet[3367]: W0430 00:38:47.785133 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:47.785286 kubelet[3367]: E0430 00:38:47.785202 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:47.785406 kubelet[3367]: E0430 00:38:47.785394 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:47.785500 kubelet[3367]: W0430 00:38:47.785489 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:47.785568 kubelet[3367]: E0430 00:38:47.785558 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:47.785899 kubelet[3367]: E0430 00:38:47.785777 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:47.785899 kubelet[3367]: W0430 00:38:47.785789 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:47.785899 kubelet[3367]: E0430 00:38:47.785806 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:47.786067 kubelet[3367]: E0430 00:38:47.786055 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:47.786123 kubelet[3367]: W0430 00:38:47.786113 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:47.786232 kubelet[3367]: E0430 00:38:47.786210 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:47.786498 kubelet[3367]: E0430 00:38:47.786484 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:47.790403 kubelet[3367]: W0430 00:38:47.786565 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:47.790403 kubelet[3367]: E0430 00:38:47.786599 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:47.790403 kubelet[3367]: E0430 00:38:47.786753 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:47.790403 kubelet[3367]: W0430 00:38:47.786762 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:47.790403 kubelet[3367]: E0430 00:38:47.786785 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:47.790403 kubelet[3367]: E0430 00:38:47.786922 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:47.790403 kubelet[3367]: W0430 00:38:47.786929 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:47.790403 kubelet[3367]: E0430 00:38:47.786945 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:47.790403 kubelet[3367]: E0430 00:38:47.787107 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:47.790403 kubelet[3367]: W0430 00:38:47.787115 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:47.790719 kubelet[3367]: E0430 00:38:47.787129 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:47.790719 kubelet[3367]: E0430 00:38:47.787286 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:47.790719 kubelet[3367]: W0430 00:38:47.787295 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:47.790719 kubelet[3367]: E0430 00:38:47.787311 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:47.790719 kubelet[3367]: E0430 00:38:47.787509 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:47.790719 kubelet[3367]: W0430 00:38:47.787518 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:47.790719 kubelet[3367]: E0430 00:38:47.787534 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:47.790719 kubelet[3367]: E0430 00:38:47.787756 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:47.790719 kubelet[3367]: W0430 00:38:47.787766 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:47.790719 kubelet[3367]: E0430 00:38:47.787783 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:47.790909 kubelet[3367]: E0430 00:38:47.787966 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:47.790909 kubelet[3367]: W0430 00:38:47.787976 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:47.790909 kubelet[3367]: E0430 00:38:47.787990 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:47.790909 kubelet[3367]: E0430 00:38:47.788160 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:47.790909 kubelet[3367]: W0430 00:38:47.788171 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:47.790909 kubelet[3367]: E0430 00:38:47.788189 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:47.790909 kubelet[3367]: E0430 00:38:47.788352 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:47.790909 kubelet[3367]: W0430 00:38:47.788360 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:47.790909 kubelet[3367]: E0430 00:38:47.788388 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:47.790909 kubelet[3367]: E0430 00:38:47.788746 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:47.791141 kubelet[3367]: W0430 00:38:47.788764 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:47.791141 kubelet[3367]: E0430 00:38:47.788774 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:48.226232 containerd[1816]: time="2025-04-30T00:38:48.225527087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:48.230240 containerd[1816]: time="2025-04-30T00:38:48.230205553Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" Apr 30 00:38:48.235200 containerd[1816]: time="2025-04-30T00:38:48.235010058Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:48.240073 containerd[1816]: time="2025-04-30T00:38:48.240025643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:48.240847 containerd[1816]: time="2025-04-30T00:38:48.240567121Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.270045631s" Apr 30 00:38:48.240847 containerd[1816]: time="2025-04-30T00:38:48.240599881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" Apr 30 00:38:48.243573 containerd[1816]: time="2025-04-30T00:38:48.243451233Z" level=info msg="CreateContainer within sandbox \"fad51d16b15d6448e7bbc0492bbd9a84fd0be32d7572135f482648584d03bb3b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 00:38:48.282195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1730590438.mount: Deactivated successfully. Apr 30 00:38:48.297898 containerd[1816]: time="2025-04-30T00:38:48.297863906Z" level=info msg="CreateContainer within sandbox \"fad51d16b15d6448e7bbc0492bbd9a84fd0be32d7572135f482648584d03bb3b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9821e1b573aca504b6bf158ee40a664018f9dc6ca72cc11359b588534e7e99b1\"" Apr 30 00:38:48.298641 containerd[1816]: time="2025-04-30T00:38:48.298561024Z" level=info msg="StartContainer for \"9821e1b573aca504b6bf158ee40a664018f9dc6ca72cc11359b588534e7e99b1\"" Apr 30 00:38:48.352864 containerd[1816]: time="2025-04-30T00:38:48.352826218Z" level=info msg="StartContainer for \"9821e1b573aca504b6bf158ee40a664018f9dc6ca72cc11359b588534e7e99b1\" returns successfully" Apr 30 00:38:48.379686 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9821e1b573aca504b6bf158ee40a664018f9dc6ca72cc11359b588534e7e99b1-rootfs.mount: Deactivated successfully. 
Apr 30 00:38:49.209752 kubelet[3367]: E0430 00:38:48.577982 3367 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tttw8" podUID="eb05636f-1a4f-4911-8c79-8727fa131132" Apr 30 00:38:49.209752 kubelet[3367]: I0430 00:38:48.706992 3367 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 00:38:49.210148 containerd[1816]: time="2025-04-30T00:38:48.771154857Z" level=error msg="collecting metrics for 9821e1b573aca504b6bf158ee40a664018f9dc6ca72cc11359b588534e7e99b1" error="cgroups: cgroup deleted: unknown" Apr 30 00:38:49.276948 containerd[1816]: time="2025-04-30T00:38:49.276822549Z" level=info msg="shim disconnected" id=9821e1b573aca504b6bf158ee40a664018f9dc6ca72cc11359b588534e7e99b1 namespace=k8s.io Apr 30 00:38:49.276948 containerd[1816]: time="2025-04-30T00:38:49.276943748Z" level=warning msg="cleaning up after shim disconnected" id=9821e1b573aca504b6bf158ee40a664018f9dc6ca72cc11359b588534e7e99b1 namespace=k8s.io Apr 30 00:38:49.276948 containerd[1816]: time="2025-04-30T00:38:49.276955148Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:38:49.710868 containerd[1816]: time="2025-04-30T00:38:49.710819060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" Apr 30 00:38:50.577889 kubelet[3367]: E0430 00:38:50.577571 3367 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tttw8" podUID="eb05636f-1a4f-4911-8c79-8727fa131132" Apr 30 00:38:52.530403 containerd[1816]: time="2025-04-30T00:38:52.530079667Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:52.533126 containerd[1816]: time="2025-04-30T00:38:52.533094939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" Apr 30 00:38:52.537856 containerd[1816]: time="2025-04-30T00:38:52.537819045Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:52.543748 containerd[1816]: time="2025-04-30T00:38:52.543694709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:52.544502 containerd[1816]: time="2025-04-30T00:38:52.544321187Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 2.833456527s" Apr 30 00:38:52.544502 containerd[1816]: time="2025-04-30T00:38:52.544377227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" Apr 30 00:38:52.547621 containerd[1816]: time="2025-04-30T00:38:52.547450699Z" level=info msg="CreateContainer within sandbox \"fad51d16b15d6448e7bbc0492bbd9a84fd0be32d7572135f482648584d03bb3b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 00:38:52.578892 kubelet[3367]: E0430 00:38:52.578022 3367 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni 
plugin not initialized" pod="calico-system/csi-node-driver-tttw8" podUID="eb05636f-1a4f-4911-8c79-8727fa131132" Apr 30 00:38:52.604473 containerd[1816]: time="2025-04-30T00:38:52.604396301Z" level=info msg="CreateContainer within sandbox \"fad51d16b15d6448e7bbc0492bbd9a84fd0be32d7572135f482648584d03bb3b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a8d038527f718abdb38fc645a3c6be048fe2f8338c7e8f284a4b1bb2e00a75ae\"" Apr 30 00:38:52.606241 containerd[1816]: time="2025-04-30T00:38:52.604916699Z" level=info msg="StartContainer for \"a8d038527f718abdb38fc645a3c6be048fe2f8338c7e8f284a4b1bb2e00a75ae\"" Apr 30 00:38:52.660193 containerd[1816]: time="2025-04-30T00:38:52.660069826Z" level=info msg="StartContainer for \"a8d038527f718abdb38fc645a3c6be048fe2f8338c7e8f284a4b1bb2e00a75ae\" returns successfully" Apr 30 00:38:53.752581 containerd[1816]: time="2025-04-30T00:38:53.752531358Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 00:38:53.769268 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8d038527f718abdb38fc645a3c6be048fe2f8338c7e8f284a4b1bb2e00a75ae-rootfs.mount: Deactivated successfully. 
Apr 30 00:38:53.808579 kubelet[3367]: I0430 00:38:53.807895 3367 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 30 00:38:53.844181 kubelet[3367]: I0430 00:38:53.844114 3367 topology_manager.go:215] "Topology Admit Handler" podUID="2e0ed15b-c5c4-4942-8821-6ea97e00435b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-2q77n" Apr 30 00:38:53.854943 kubelet[3367]: I0430 00:38:53.854379 3367 topology_manager.go:215] "Topology Admit Handler" podUID="6faae125-ceaa-469a-865a-339ba3fb0fe3" podNamespace="calico-system" podName="calico-kube-controllers-588cb9568f-qkvlw" Apr 30 00:38:53.854943 kubelet[3367]: I0430 00:38:53.854529 3367 topology_manager.go:215] "Topology Admit Handler" podUID="b9f44dcb-0a4c-49fb-98eb-8a8269048d60" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bmpmw" Apr 30 00:38:53.861747 kubelet[3367]: I0430 00:38:53.861639 3367 topology_manager.go:215] "Topology Admit Handler" podUID="9a6c46b4-3687-469f-a6e7-86919697db2e" podNamespace="calico-apiserver" podName="calico-apiserver-7688785779-95fgn" Apr 30 00:38:53.862674 kubelet[3367]: I0430 00:38:53.862341 3367 topology_manager.go:215] "Topology Admit Handler" podUID="90111c95-ac27-4e82-acee-f83ab191ee0f" podNamespace="calico-apiserver" podName="calico-apiserver-5db4fc9d49-vxdt7" Apr 30 00:38:53.868033 kubelet[3367]: I0430 00:38:53.867926 3367 topology_manager.go:215] "Topology Admit Handler" podUID="8524bcc8-ef18-4787-b5da-973e0f7abb0b" podNamespace="calico-apiserver" podName="calico-apiserver-7688785779-kmccd" Apr 30 00:38:53.924168 kubelet[3367]: I0430 00:38:53.924119 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr88b\" (UniqueName: \"kubernetes.io/projected/90111c95-ac27-4e82-acee-f83ab191ee0f-kube-api-access-tr88b\") pod \"calico-apiserver-5db4fc9d49-vxdt7\" (UID: \"90111c95-ac27-4e82-acee-f83ab191ee0f\") " pod="calico-apiserver/calico-apiserver-5db4fc9d49-vxdt7" Apr 30 
00:38:53.924168 kubelet[3367]: I0430 00:38:53.924168 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhrcf\" (UniqueName: \"kubernetes.io/projected/8524bcc8-ef18-4787-b5da-973e0f7abb0b-kube-api-access-xhrcf\") pod \"calico-apiserver-7688785779-kmccd\" (UID: \"8524bcc8-ef18-4787-b5da-973e0f7abb0b\") " pod="calico-apiserver/calico-apiserver-7688785779-kmccd" Apr 30 00:38:53.924665 kubelet[3367]: I0430 00:38:53.924195 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9twfh\" (UniqueName: \"kubernetes.io/projected/9a6c46b4-3687-469f-a6e7-86919697db2e-kube-api-access-9twfh\") pod \"calico-apiserver-7688785779-95fgn\" (UID: \"9a6c46b4-3687-469f-a6e7-86919697db2e\") " pod="calico-apiserver/calico-apiserver-7688785779-95fgn" Apr 30 00:38:53.924665 kubelet[3367]: I0430 00:38:53.924211 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/90111c95-ac27-4e82-acee-f83ab191ee0f-calico-apiserver-certs\") pod \"calico-apiserver-5db4fc9d49-vxdt7\" (UID: \"90111c95-ac27-4e82-acee-f83ab191ee0f\") " pod="calico-apiserver/calico-apiserver-5db4fc9d49-vxdt7" Apr 30 00:38:53.924665 kubelet[3367]: I0430 00:38:53.924230 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6faae125-ceaa-469a-865a-339ba3fb0fe3-tigera-ca-bundle\") pod \"calico-kube-controllers-588cb9568f-qkvlw\" (UID: \"6faae125-ceaa-469a-865a-339ba3fb0fe3\") " pod="calico-system/calico-kube-controllers-588cb9568f-qkvlw" Apr 30 00:38:53.924665 kubelet[3367]: I0430 00:38:53.924249 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5pr4\" (UniqueName: 
\"kubernetes.io/projected/b9f44dcb-0a4c-49fb-98eb-8a8269048d60-kube-api-access-f5pr4\") pod \"coredns-7db6d8ff4d-bmpmw\" (UID: \"b9f44dcb-0a4c-49fb-98eb-8a8269048d60\") " pod="kube-system/coredns-7db6d8ff4d-bmpmw" Apr 30 00:38:53.924665 kubelet[3367]: I0430 00:38:53.924265 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w788s\" (UniqueName: \"kubernetes.io/projected/6faae125-ceaa-469a-865a-339ba3fb0fe3-kube-api-access-w788s\") pod \"calico-kube-controllers-588cb9568f-qkvlw\" (UID: \"6faae125-ceaa-469a-865a-339ba3fb0fe3\") " pod="calico-system/calico-kube-controllers-588cb9568f-qkvlw" Apr 30 00:38:53.924788 kubelet[3367]: I0430 00:38:53.924281 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zx45\" (UniqueName: \"kubernetes.io/projected/2e0ed15b-c5c4-4942-8821-6ea97e00435b-kube-api-access-7zx45\") pod \"coredns-7db6d8ff4d-2q77n\" (UID: \"2e0ed15b-c5c4-4942-8821-6ea97e00435b\") " pod="kube-system/coredns-7db6d8ff4d-2q77n" Apr 30 00:38:53.924788 kubelet[3367]: I0430 00:38:53.924328 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8524bcc8-ef18-4787-b5da-973e0f7abb0b-calico-apiserver-certs\") pod \"calico-apiserver-7688785779-kmccd\" (UID: \"8524bcc8-ef18-4787-b5da-973e0f7abb0b\") " pod="calico-apiserver/calico-apiserver-7688785779-kmccd" Apr 30 00:38:53.924788 kubelet[3367]: I0430 00:38:53.924349 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9f44dcb-0a4c-49fb-98eb-8a8269048d60-config-volume\") pod \"coredns-7db6d8ff4d-bmpmw\" (UID: \"b9f44dcb-0a4c-49fb-98eb-8a8269048d60\") " pod="kube-system/coredns-7db6d8ff4d-bmpmw" Apr 30 00:38:53.924788 kubelet[3367]: I0430 00:38:53.924387 3367 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e0ed15b-c5c4-4942-8821-6ea97e00435b-config-volume\") pod \"coredns-7db6d8ff4d-2q77n\" (UID: \"2e0ed15b-c5c4-4942-8821-6ea97e00435b\") " pod="kube-system/coredns-7db6d8ff4d-2q77n" Apr 30 00:38:53.924788 kubelet[3367]: I0430 00:38:53.924407 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9a6c46b4-3687-469f-a6e7-86919697db2e-calico-apiserver-certs\") pod \"calico-apiserver-7688785779-95fgn\" (UID: \"9a6c46b4-3687-469f-a6e7-86919697db2e\") " pod="calico-apiserver/calico-apiserver-7688785779-95fgn" Apr 30 00:38:54.913101 containerd[1816]: time="2025-04-30T00:38:54.912646701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2q77n,Uid:2e0ed15b-c5c4-4942-8821-6ea97e00435b,Namespace:kube-system,Attempt:0,}" Apr 30 00:38:54.913101 containerd[1816]: time="2025-04-30T00:38:54.912819221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-588cb9568f-qkvlw,Uid:6faae125-ceaa-469a-865a-339ba3fb0fe3,Namespace:calico-system,Attempt:0,}" Apr 30 00:38:54.914470 containerd[1816]: time="2025-04-30T00:38:54.913937418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7688785779-kmccd,Uid:8524bcc8-ef18-4787-b5da-973e0f7abb0b,Namespace:calico-apiserver,Attempt:0,}" Apr 30 00:38:54.914720 containerd[1816]: time="2025-04-30T00:38:54.914701535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bmpmw,Uid:b9f44dcb-0a4c-49fb-98eb-8a8269048d60,Namespace:kube-system,Attempt:0,}" Apr 30 00:38:54.914909 containerd[1816]: time="2025-04-30T00:38:54.914892495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tttw8,Uid:eb05636f-1a4f-4911-8c79-8727fa131132,Namespace:calico-system,Attempt:0,}" Apr 30 00:38:54.915287 
containerd[1816]: time="2025-04-30T00:38:54.915192654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5db4fc9d49-vxdt7,Uid:90111c95-ac27-4e82-acee-f83ab191ee0f,Namespace:calico-apiserver,Attempt:0,}" Apr 30 00:38:54.915504 containerd[1816]: time="2025-04-30T00:38:54.915264294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7688785779-95fgn,Uid:9a6c46b4-3687-469f-a6e7-86919697db2e,Namespace:calico-apiserver,Attempt:0,}" Apr 30 00:38:54.958392 containerd[1816]: time="2025-04-30T00:38:54.958220575Z" level=info msg="shim disconnected" id=a8d038527f718abdb38fc645a3c6be048fe2f8338c7e8f284a4b1bb2e00a75ae namespace=k8s.io Apr 30 00:38:54.958392 containerd[1816]: time="2025-04-30T00:38:54.958354774Z" level=warning msg="cleaning up after shim disconnected" id=a8d038527f718abdb38fc645a3c6be048fe2f8338c7e8f284a4b1bb2e00a75ae namespace=k8s.io Apr 30 00:38:54.958392 containerd[1816]: time="2025-04-30T00:38:54.958391214Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:38:55.173035 containerd[1816]: time="2025-04-30T00:38:55.172631340Z" level=error msg="Failed to destroy network for sandbox \"be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:55.173035 containerd[1816]: time="2025-04-30T00:38:55.172898860Z" level=error msg="encountered an error cleaning up failed sandbox \"be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:55.173035 containerd[1816]: time="2025-04-30T00:38:55.172936779Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-2q77n,Uid:2e0ed15b-c5c4-4942-8821-6ea97e00435b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:55.173861 kubelet[3367]: E0430 00:38:55.173461 3367 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:55.174182 kubelet[3367]: E0430 00:38:55.174034 3367 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2q77n" Apr 30 00:38:55.174182 kubelet[3367]: E0430 00:38:55.174083 3367 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2q77n" Apr 30 00:38:55.174182 kubelet[3367]: E0430 00:38:55.174143 3367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-7db6d8ff4d-2q77n_kube-system(2e0ed15b-c5c4-4942-8821-6ea97e00435b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-2q77n_kube-system(2e0ed15b-c5c4-4942-8821-6ea97e00435b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-2q77n" podUID="2e0ed15b-c5c4-4942-8821-6ea97e00435b" Apr 30 00:38:55.280834 containerd[1816]: time="2025-04-30T00:38:55.280784720Z" level=error msg="Failed to destroy network for sandbox \"2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:55.281559 containerd[1816]: time="2025-04-30T00:38:55.281337599Z" level=error msg="encountered an error cleaning up failed sandbox \"2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:55.281718 containerd[1816]: time="2025-04-30T00:38:55.281685038Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-588cb9568f-qkvlw,Uid:6faae125-ceaa-469a-865a-339ba3fb0fe3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:55.282444 kubelet[3367]: E0430 00:38:55.281991 3367 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:55.282444 kubelet[3367]: E0430 00:38:55.282047 3367 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-588cb9568f-qkvlw" Apr 30 00:38:55.282444 kubelet[3367]: E0430 00:38:55.282065 3367 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-588cb9568f-qkvlw" Apr 30 00:38:55.282583 kubelet[3367]: E0430 00:38:55.282107 3367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-588cb9568f-qkvlw_calico-system(6faae125-ceaa-469a-865a-339ba3fb0fe3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-588cb9568f-qkvlw_calico-system(6faae125-ceaa-469a-865a-339ba3fb0fe3)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-588cb9568f-qkvlw" podUID="6faae125-ceaa-469a-865a-339ba3fb0fe3" Apr 30 00:38:55.303433 containerd[1816]: time="2025-04-30T00:38:55.303341698Z" level=error msg="Failed to destroy network for sandbox \"390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:55.303715 containerd[1816]: time="2025-04-30T00:38:55.303682057Z" level=error msg="encountered an error cleaning up failed sandbox \"390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:55.303770 containerd[1816]: time="2025-04-30T00:38:55.303730377Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bmpmw,Uid:b9f44dcb-0a4c-49fb-98eb-8a8269048d60,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:55.303942 kubelet[3367]: E0430 00:38:55.303901 3367 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:55.303997 kubelet[3367]: E0430 00:38:55.303959 3367 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bmpmw" Apr 30 00:38:55.303997 kubelet[3367]: E0430 00:38:55.303979 3367 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bmpmw" Apr 30 00:38:55.304049 kubelet[3367]: E0430 00:38:55.304012 3367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-bmpmw_kube-system(b9f44dcb-0a4c-49fb-98eb-8a8269048d60)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-bmpmw_kube-system(b9f44dcb-0a4c-49fb-98eb-8a8269048d60)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bmpmw" 
podUID="b9f44dcb-0a4c-49fb-98eb-8a8269048d60" Apr 30 00:38:55.319381 containerd[1816]: time="2025-04-30T00:38:55.319136174Z" level=error msg="Failed to destroy network for sandbox \"5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:55.319742 containerd[1816]: time="2025-04-30T00:38:55.319498973Z" level=error msg="encountered an error cleaning up failed sandbox \"5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:55.319800 containerd[1816]: time="2025-04-30T00:38:55.319665933Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7688785779-kmccd,Uid:8524bcc8-ef18-4787-b5da-973e0f7abb0b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:55.320198 kubelet[3367]: E0430 00:38:55.320041 3367 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:55.320198 kubelet[3367]: E0430 00:38:55.320119 3367 kuberuntime_sandbox.go:72] "Failed to 
create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7688785779-kmccd" Apr 30 00:38:55.320198 kubelet[3367]: E0430 00:38:55.320138 3367 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7688785779-kmccd" Apr 30 00:38:55.320345 kubelet[3367]: E0430 00:38:55.320171 3367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7688785779-kmccd_calico-apiserver(8524bcc8-ef18-4787-b5da-973e0f7abb0b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7688785779-kmccd_calico-apiserver(8524bcc8-ef18-4787-b5da-973e0f7abb0b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7688785779-kmccd" podUID="8524bcc8-ef18-4787-b5da-973e0f7abb0b" Apr 30 00:38:55.323257 containerd[1816]: time="2025-04-30T00:38:55.322715084Z" level=error msg="Failed to destroy network for sandbox \"bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c\"" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:55.323257 containerd[1816]: time="2025-04-30T00:38:55.323035443Z" level=error msg="encountered an error cleaning up failed sandbox \"bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:55.323257 containerd[1816]: time="2025-04-30T00:38:55.323073483Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5db4fc9d49-vxdt7,Uid:90111c95-ac27-4e82-acee-f83ab191ee0f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:55.323511 containerd[1816]: time="2025-04-30T00:38:55.323485762Z" level=error msg="Failed to destroy network for sandbox \"66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:55.323728 kubelet[3367]: E0430 00:38:55.323684 3367 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 
00:38:55.323783 kubelet[3367]: E0430 00:38:55.323742 3367 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5db4fc9d49-vxdt7" Apr 30 00:38:55.323783 kubelet[3367]: E0430 00:38:55.323768 3367 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5db4fc9d49-vxdt7" Apr 30 00:38:55.323837 kubelet[3367]: E0430 00:38:55.323814 3367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5db4fc9d49-vxdt7_calico-apiserver(90111c95-ac27-4e82-acee-f83ab191ee0f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5db4fc9d49-vxdt7_calico-apiserver(90111c95-ac27-4e82-acee-f83ab191ee0f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5db4fc9d49-vxdt7" podUID="90111c95-ac27-4e82-acee-f83ab191ee0f" Apr 30 00:38:55.324075 containerd[1816]: time="2025-04-30T00:38:55.324047001Z" level=error msg="encountered an error cleaning up failed sandbox 
\"66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:55.324463 containerd[1816]: time="2025-04-30T00:38:55.324434359Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tttw8,Uid:eb05636f-1a4f-4911-8c79-8727fa131132,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:55.324780 kubelet[3367]: E0430 00:38:55.324668 3367 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:55.324780 kubelet[3367]: E0430 00:38:55.324704 3367 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tttw8" Apr 30 00:38:55.324780 kubelet[3367]: E0430 00:38:55.324720 3367 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tttw8" Apr 30 00:38:55.324901 kubelet[3367]: E0430 00:38:55.324745 3367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tttw8_calico-system(eb05636f-1a4f-4911-8c79-8727fa131132)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tttw8_calico-system(eb05636f-1a4f-4911-8c79-8727fa131132)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tttw8" podUID="eb05636f-1a4f-4911-8c79-8727fa131132" Apr 30 00:38:55.326788 containerd[1816]: time="2025-04-30T00:38:55.326751233Z" level=error msg="Failed to destroy network for sandbox \"979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:55.327155 containerd[1816]: time="2025-04-30T00:38:55.327102872Z" level=error msg="encountered an error cleaning up failed sandbox \"979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:55.327192 containerd[1816]: time="2025-04-30T00:38:55.327167712Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7688785779-95fgn,Uid:9a6c46b4-3687-469f-a6e7-86919697db2e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:55.327349 kubelet[3367]: E0430 00:38:55.327320 3367 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:55.327407 kubelet[3367]: E0430 00:38:55.327358 3367 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7688785779-95fgn" Apr 30 00:38:55.327407 kubelet[3367]: E0430 00:38:55.327385 3367 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7688785779-95fgn" Apr 30 00:38:55.327460 kubelet[3367]: E0430 
00:38:55.327413 3367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7688785779-95fgn_calico-apiserver(9a6c46b4-3687-469f-a6e7-86919697db2e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7688785779-95fgn_calico-apiserver(9a6c46b4-3687-469f-a6e7-86919697db2e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7688785779-95fgn" podUID="9a6c46b4-3687-469f-a6e7-86919697db2e" Apr 30 00:38:55.721545 kubelet[3367]: I0430 00:38:55.721520 3367 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" Apr 30 00:38:55.723444 containerd[1816]: time="2025-04-30T00:38:55.723013654Z" level=info msg="StopPodSandbox for \"2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2\"" Apr 30 00:38:55.723444 containerd[1816]: time="2025-04-30T00:38:55.723171494Z" level=info msg="Ensure that sandbox 2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2 in task-service has been cleanup successfully" Apr 30 00:38:55.730066 containerd[1816]: time="2025-04-30T00:38:55.730034515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" Apr 30 00:38:55.730943 kubelet[3367]: I0430 00:38:55.730874 3367 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" Apr 30 00:38:55.731838 containerd[1816]: time="2025-04-30T00:38:55.731622071Z" level=info msg="StopPodSandbox for \"66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672\"" Apr 30 00:38:55.731917 
containerd[1816]: time="2025-04-30T00:38:55.731860110Z" level=info msg="Ensure that sandbox 66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672 in task-service has been cleanup successfully" Apr 30 00:38:55.735452 kubelet[3367]: I0430 00:38:55.735417 3367 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" Apr 30 00:38:55.737215 containerd[1816]: time="2025-04-30T00:38:55.735820779Z" level=info msg="StopPodSandbox for \"be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260\"" Apr 30 00:38:55.737215 containerd[1816]: time="2025-04-30T00:38:55.735956619Z" level=info msg="Ensure that sandbox be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260 in task-service has been cleanup successfully" Apr 30 00:38:55.743755 kubelet[3367]: I0430 00:38:55.743732 3367 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" Apr 30 00:38:55.745086 containerd[1816]: time="2025-04-30T00:38:55.745047753Z" level=info msg="StopPodSandbox for \"979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146\"" Apr 30 00:38:55.749143 containerd[1816]: time="2025-04-30T00:38:55.748923423Z" level=info msg="Ensure that sandbox 979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146 in task-service has been cleanup successfully" Apr 30 00:38:55.752460 kubelet[3367]: I0430 00:38:55.752187 3367 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" Apr 30 00:38:55.753954 containerd[1816]: time="2025-04-30T00:38:55.753911569Z" level=info msg="StopPodSandbox for \"bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c\"" Apr 30 00:38:55.754957 kubelet[3367]: I0430 00:38:55.754930 3367 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" Apr 30 00:38:55.755461 containerd[1816]: time="2025-04-30T00:38:55.754886606Z" level=info msg="Ensure that sandbox bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c in task-service has been cleanup successfully" Apr 30 00:38:55.755729 containerd[1816]: time="2025-04-30T00:38:55.755687964Z" level=info msg="StopPodSandbox for \"390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca\"" Apr 30 00:38:55.755888 containerd[1816]: time="2025-04-30T00:38:55.755819083Z" level=info msg="Ensure that sandbox 390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca in task-service has been cleanup successfully" Apr 30 00:38:55.762760 kubelet[3367]: I0430 00:38:55.762735 3367 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" Apr 30 00:38:55.764033 containerd[1816]: time="2025-04-30T00:38:55.763696662Z" level=info msg="StopPodSandbox for \"5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a\"" Apr 30 00:38:55.765206 containerd[1816]: time="2025-04-30T00:38:55.764945018Z" level=info msg="Ensure that sandbox 5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a in task-service has been cleanup successfully" Apr 30 00:38:55.773668 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2-shm.mount: Deactivated successfully. Apr 30 00:38:55.773814 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260-shm.mount: Deactivated successfully. 
Apr 30 00:38:55.821925 containerd[1816]: time="2025-04-30T00:38:55.821755781Z" level=error msg="StopPodSandbox for \"2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2\" failed" error="failed to destroy network for sandbox \"2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:55.821925 containerd[1816]: time="2025-04-30T00:38:55.821875660Z" level=error msg="StopPodSandbox for \"5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a\" failed" error="failed to destroy network for sandbox \"5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:55.824702 kubelet[3367]: E0430 00:38:55.824659 3367 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" Apr 30 00:38:55.824810 kubelet[3367]: E0430 00:38:55.824716 3367 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a"} Apr 30 00:38:55.824810 kubelet[3367]: E0430 00:38:55.824769 3367 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8524bcc8-ef18-4787-b5da-973e0f7abb0b\" with KillPodSandboxError: \"rpc error: code = Unknown 
desc = failed to destroy network for sandbox \\\"5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:38:55.824810 kubelet[3367]: E0430 00:38:55.824789 3367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8524bcc8-ef18-4787-b5da-973e0f7abb0b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7688785779-kmccd" podUID="8524bcc8-ef18-4787-b5da-973e0f7abb0b" Apr 30 00:38:55.824810 kubelet[3367]: E0430 00:38:55.824659 3367 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" Apr 30 00:38:55.824993 kubelet[3367]: E0430 00:38:55.824816 3367 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2"} Apr 30 00:38:55.824993 kubelet[3367]: E0430 00:38:55.824833 3367 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6faae125-ceaa-469a-865a-339ba3fb0fe3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed 
to destroy network for sandbox \\\"2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:38:55.824993 kubelet[3367]: E0430 00:38:55.824847 3367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6faae125-ceaa-469a-865a-339ba3fb0fe3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-588cb9568f-qkvlw" podUID="6faae125-ceaa-469a-865a-339ba3fb0fe3" Apr 30 00:38:55.825322 containerd[1816]: time="2025-04-30T00:38:55.824967812Z" level=error msg="StopPodSandbox for \"66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672\" failed" error="failed to destroy network for sandbox \"66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:55.825359 kubelet[3367]: E0430 00:38:55.825137 3367 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" Apr 30 
00:38:55.825359 kubelet[3367]: E0430 00:38:55.825174 3367 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672"} Apr 30 00:38:55.825359 kubelet[3367]: E0430 00:38:55.825197 3367 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eb05636f-1a4f-4911-8c79-8727fa131132\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:38:55.825359 kubelet[3367]: E0430 00:38:55.825214 3367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eb05636f-1a4f-4911-8c79-8727fa131132\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tttw8" podUID="eb05636f-1a4f-4911-8c79-8727fa131132" Apr 30 00:38:55.848801 containerd[1816]: time="2025-04-30T00:38:55.848477627Z" level=error msg="StopPodSandbox for \"979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146\" failed" error="failed to destroy network for sandbox \"979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:55.848925 kubelet[3367]: E0430 00:38:55.848689 3367 remote_runtime.go:222] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" Apr 30 00:38:55.848925 kubelet[3367]: E0430 00:38:55.848749 3367 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146"} Apr 30 00:38:55.848925 kubelet[3367]: E0430 00:38:55.848779 3367 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9a6c46b4-3687-469f-a6e7-86919697db2e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:38:55.849590 kubelet[3367]: E0430 00:38:55.849555 3367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9a6c46b4-3687-469f-a6e7-86919697db2e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7688785779-95fgn" podUID="9a6c46b4-3687-469f-a6e7-86919697db2e" Apr 30 00:38:55.854349 containerd[1816]: time="2025-04-30T00:38:55.853766972Z" level=error 
msg="StopPodSandbox for \"be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260\" failed" error="failed to destroy network for sandbox \"be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:55.854651 kubelet[3367]: E0430 00:38:55.854493 3367 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" Apr 30 00:38:55.854651 kubelet[3367]: E0430 00:38:55.854522 3367 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260"} Apr 30 00:38:55.854651 kubelet[3367]: E0430 00:38:55.854548 3367 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2e0ed15b-c5c4-4942-8821-6ea97e00435b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:38:55.854651 kubelet[3367]: E0430 00:38:55.854565 3367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2e0ed15b-c5c4-4942-8821-6ea97e00435b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy 
network for sandbox \\\"be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-2q77n" podUID="2e0ed15b-c5c4-4942-8821-6ea97e00435b" Apr 30 00:38:55.855575 containerd[1816]: time="2025-04-30T00:38:55.855451047Z" level=error msg="StopPodSandbox for \"bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c\" failed" error="failed to destroy network for sandbox \"bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:55.856225 kubelet[3367]: E0430 00:38:55.855615 3367 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" Apr 30 00:38:55.856225 kubelet[3367]: E0430 00:38:55.855642 3367 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c"} Apr 30 00:38:55.856225 kubelet[3367]: E0430 00:38:55.855663 3367 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"90111c95-ac27-4e82-acee-f83ab191ee0f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c\\\": 
plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:38:55.856225 kubelet[3367]: E0430 00:38:55.855680 3367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"90111c95-ac27-4e82-acee-f83ab191ee0f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5db4fc9d49-vxdt7" podUID="90111c95-ac27-4e82-acee-f83ab191ee0f" Apr 30 00:38:55.856489 containerd[1816]: time="2025-04-30T00:38:55.856463084Z" level=error msg="StopPodSandbox for \"390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca\" failed" error="failed to destroy network for sandbox \"390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:55.856748 kubelet[3367]: E0430 00:38:55.856656 3367 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" Apr 30 00:38:55.856748 kubelet[3367]: E0430 00:38:55.856688 3367 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca"} Apr 30 00:38:55.856748 kubelet[3367]: E0430 00:38:55.856710 3367 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b9f44dcb-0a4c-49fb-98eb-8a8269048d60\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:38:55.856748 kubelet[3367]: E0430 00:38:55.856727 3367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b9f44dcb-0a4c-49fb-98eb-8a8269048d60\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bmpmw" podUID="b9f44dcb-0a4c-49fb-98eb-8a8269048d60" Apr 30 00:39:04.172452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount401147283.mount: Deactivated successfully. 
Apr 30 00:39:07.578626 containerd[1816]: time="2025-04-30T00:39:07.578093690Z" level=info msg="StopPodSandbox for \"979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146\"" Apr 30 00:39:08.095906 containerd[1816]: time="2025-04-30T00:39:07.579032207Z" level=info msg="StopPodSandbox for \"be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260\"" Apr 30 00:39:08.095906 containerd[1816]: time="2025-04-30T00:39:07.581004601Z" level=info msg="StopPodSandbox for \"bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c\"" Apr 30 00:39:08.095906 containerd[1816]: time="2025-04-30T00:39:07.896999682Z" level=error msg="StopPodSandbox for \"be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260\" failed" error="failed to destroy network for sandbox \"be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:39:08.095906 containerd[1816]: time="2025-04-30T00:39:07.897344921Z" level=error msg="StopPodSandbox for \"bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c\" failed" error="failed to destroy network for sandbox \"bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:39:08.095906 containerd[1816]: time="2025-04-30T00:39:07.897812960Z" level=error msg="StopPodSandbox for \"979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146\" failed" error="failed to destroy network for sandbox \"979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Apr 30 00:39:08.096111 kubelet[3367]: E0430 00:39:07.897242 3367 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" Apr 30 00:39:08.096111 kubelet[3367]: E0430 00:39:07.897482 3367 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260"} Apr 30 00:39:08.096111 kubelet[3367]: E0430 00:39:07.897491 3367 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" Apr 30 00:39:08.096111 kubelet[3367]: E0430 00:39:07.897529 3367 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c"} Apr 30 00:39:08.096111 kubelet[3367]: E0430 00:39:07.898789 3367 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"90111c95-ac27-4e82-acee-f83ab191ee0f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:39:08.096668 kubelet[3367]: E0430 00:39:07.898850 3367 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" Apr 30 00:39:08.096668 kubelet[3367]: E0430 00:39:07.898890 3367 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146"} Apr 30 00:39:08.096668 kubelet[3367]: E0430 00:39:07.898911 3367 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9a6c46b4-3687-469f-a6e7-86919697db2e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:39:08.096668 kubelet[3367]: E0430 00:39:07.899572 3367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9a6c46b4-3687-469f-a6e7-86919697db2e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-7688785779-95fgn" podUID="9a6c46b4-3687-469f-a6e7-86919697db2e" Apr 30 00:39:08.096787 kubelet[3367]: E0430 00:39:07.898790 3367 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2e0ed15b-c5c4-4942-8821-6ea97e00435b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:39:08.096787 kubelet[3367]: E0430 00:39:07.899650 3367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2e0ed15b-c5c4-4942-8821-6ea97e00435b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-2q77n" podUID="2e0ed15b-c5c4-4942-8821-6ea97e00435b" Apr 30 00:39:08.096787 kubelet[3367]: E0430 00:39:07.900399 3367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"90111c95-ac27-4e82-acee-f83ab191ee0f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5db4fc9d49-vxdt7" podUID="90111c95-ac27-4e82-acee-f83ab191ee0f" Apr 30 00:39:08.134217 
containerd[1816]: time="2025-04-30T00:39:08.134166083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:39:08.136825 containerd[1816]: time="2025-04-30T00:39:08.136706755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" Apr 30 00:39:08.145191 containerd[1816]: time="2025-04-30T00:39:08.145135810Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:39:08.149018 containerd[1816]: time="2025-04-30T00:39:08.148967318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:39:08.150049 containerd[1816]: time="2025-04-30T00:39:08.149681396Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 12.419611841s" Apr 30 00:39:08.150049 containerd[1816]: time="2025-04-30T00:39:08.149715276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" Apr 30 00:39:08.163930 containerd[1816]: time="2025-04-30T00:39:08.163884313Z" level=info msg="CreateContainer within sandbox \"fad51d16b15d6448e7bbc0492bbd9a84fd0be32d7572135f482648584d03bb3b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 00:39:08.216470 containerd[1816]: time="2025-04-30T00:39:08.216425833Z" level=info msg="CreateContainer within sandbox 
\"fad51d16b15d6448e7bbc0492bbd9a84fd0be32d7572135f482648584d03bb3b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f1e4170f527b1e2fa33e07f6be8cc83571c1f6ff3afdd771a4399d29cac77db1\"" Apr 30 00:39:08.217783 containerd[1816]: time="2025-04-30T00:39:08.217639070Z" level=info msg="StartContainer for \"f1e4170f527b1e2fa33e07f6be8cc83571c1f6ff3afdd771a4399d29cac77db1\"" Apr 30 00:39:08.271657 containerd[1816]: time="2025-04-30T00:39:08.271571186Z" level=info msg="StartContainer for \"f1e4170f527b1e2fa33e07f6be8cc83571c1f6ff3afdd771a4399d29cac77db1\" returns successfully" Apr 30 00:39:08.457522 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Apr 30 00:39:08.457701 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Apr 30 00:39:10.579887 containerd[1816]: time="2025-04-30T00:39:10.578976548Z" level=info msg="StopPodSandbox for \"390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca\"" Apr 30 00:39:10.579887 containerd[1816]: time="2025-04-30T00:39:10.579759465Z" level=info msg="StopPodSandbox for \"5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a\"" Apr 30 00:39:10.582086 containerd[1816]: time="2025-04-30T00:39:10.581799219Z" level=info msg="StopPodSandbox for \"2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2\"" Apr 30 00:39:10.584344 containerd[1816]: time="2025-04-30T00:39:10.583915933Z" level=info msg="StopPodSandbox for \"66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672\"" Apr 30 00:39:10.680201 kubelet[3367]: I0430 00:39:10.679572 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-v8xnm" podStartSLOduration=3.937282363 podStartE2EDuration="26.679557323s" podCreationTimestamp="2025-04-30 00:38:44 +0000 UTC" firstStartedPulling="2025-04-30 00:38:45.407991034 +0000 UTC m=+26.919263991" lastFinishedPulling="2025-04-30 00:39:08.150265954 +0000 UTC m=+49.661538951" 
observedRunningTime="2025-04-30 00:39:08.816508893 +0000 UTC m=+50.327781890" watchObservedRunningTime="2025-04-30 00:39:10.679557323 +0000 UTC m=+52.190830320" Apr 30 00:39:10.748417 containerd[1816]: 2025-04-30 00:39:10.680 [INFO][4748] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" Apr 30 00:39:10.748417 containerd[1816]: 2025-04-30 00:39:10.681 [INFO][4748] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" iface="eth0" netns="/var/run/netns/cni-4d32986c-8a41-3e78-6a9b-cc57cee69283" Apr 30 00:39:10.748417 containerd[1816]: 2025-04-30 00:39:10.681 [INFO][4748] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" iface="eth0" netns="/var/run/netns/cni-4d32986c-8a41-3e78-6a9b-cc57cee69283" Apr 30 00:39:10.748417 containerd[1816]: 2025-04-30 00:39:10.681 [INFO][4748] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" iface="eth0" netns="/var/run/netns/cni-4d32986c-8a41-3e78-6a9b-cc57cee69283" Apr 30 00:39:10.748417 containerd[1816]: 2025-04-30 00:39:10.681 [INFO][4748] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" Apr 30 00:39:10.748417 containerd[1816]: 2025-04-30 00:39:10.681 [INFO][4748] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" Apr 30 00:39:10.748417 containerd[1816]: 2025-04-30 00:39:10.728 [INFO][4767] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" HandleID="k8s-pod-network.390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" Workload="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--bmpmw-eth0" Apr 30 00:39:10.748417 containerd[1816]: 2025-04-30 00:39:10.729 [INFO][4767] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:39:10.748417 containerd[1816]: 2025-04-30 00:39:10.729 [INFO][4767] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:39:10.748417 containerd[1816]: 2025-04-30 00:39:10.743 [WARNING][4767] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" HandleID="k8s-pod-network.390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" Workload="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--bmpmw-eth0" Apr 30 00:39:10.748417 containerd[1816]: 2025-04-30 00:39:10.743 [INFO][4767] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" HandleID="k8s-pod-network.390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" Workload="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--bmpmw-eth0" Apr 30 00:39:10.748417 containerd[1816]: 2025-04-30 00:39:10.744 [INFO][4767] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:39:10.748417 containerd[1816]: 2025-04-30 00:39:10.747 [INFO][4748] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" Apr 30 00:39:10.750172 containerd[1816]: time="2025-04-30T00:39:10.750124189Z" level=info msg="TearDown network for sandbox \"390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca\" successfully" Apr 30 00:39:10.750172 containerd[1816]: time="2025-04-30T00:39:10.750165708Z" level=info msg="StopPodSandbox for \"390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca\" returns successfully" Apr 30 00:39:10.751665 systemd[1]: run-netns-cni\x2d4d32986c\x2d8a41\x2d3e78\x2d6a9b\x2dcc57cee69283.mount: Deactivated successfully. 
Apr 30 00:39:10.753505 containerd[1816]: time="2025-04-30T00:39:10.752919860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bmpmw,Uid:b9f44dcb-0a4c-49fb-98eb-8a8269048d60,Namespace:kube-system,Attempt:1,}" Apr 30 00:39:10.762902 containerd[1816]: 2025-04-30 00:39:10.689 [INFO][4734] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" Apr 30 00:39:10.762902 containerd[1816]: 2025-04-30 00:39:10.689 [INFO][4734] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" iface="eth0" netns="/var/run/netns/cni-21a8625a-b082-07cb-e71f-a1e7906ab84c" Apr 30 00:39:10.762902 containerd[1816]: 2025-04-30 00:39:10.690 [INFO][4734] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" iface="eth0" netns="/var/run/netns/cni-21a8625a-b082-07cb-e71f-a1e7906ab84c" Apr 30 00:39:10.762902 containerd[1816]: 2025-04-30 00:39:10.690 [INFO][4734] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" iface="eth0" netns="/var/run/netns/cni-21a8625a-b082-07cb-e71f-a1e7906ab84c" Apr 30 00:39:10.762902 containerd[1816]: 2025-04-30 00:39:10.690 [INFO][4734] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" Apr 30 00:39:10.762902 containerd[1816]: 2025-04-30 00:39:10.690 [INFO][4734] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" Apr 30 00:39:10.762902 containerd[1816]: 2025-04-30 00:39:10.738 [INFO][4772] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" HandleID="k8s-pod-network.5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--kmccd-eth0" Apr 30 00:39:10.762902 containerd[1816]: 2025-04-30 00:39:10.738 [INFO][4772] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:39:10.762902 containerd[1816]: 2025-04-30 00:39:10.744 [INFO][4772] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:39:10.762902 containerd[1816]: 2025-04-30 00:39:10.757 [WARNING][4772] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" HandleID="k8s-pod-network.5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--kmccd-eth0" Apr 30 00:39:10.762902 containerd[1816]: 2025-04-30 00:39:10.757 [INFO][4772] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" HandleID="k8s-pod-network.5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--kmccd-eth0" Apr 30 00:39:10.762902 containerd[1816]: 2025-04-30 00:39:10.758 [INFO][4772] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:39:10.762902 containerd[1816]: 2025-04-30 00:39:10.761 [INFO][4734] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" Apr 30 00:39:10.765796 containerd[1816]: time="2025-04-30T00:39:10.765409662Z" level=info msg="TearDown network for sandbox \"5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a\" successfully" Apr 30 00:39:10.765796 containerd[1816]: time="2025-04-30T00:39:10.765433982Z" level=info msg="StopPodSandbox for \"5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a\" returns successfully" Apr 30 00:39:10.766237 containerd[1816]: time="2025-04-30T00:39:10.766056580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7688785779-kmccd,Uid:8524bcc8-ef18-4787-b5da-973e0f7abb0b,Namespace:calico-apiserver,Attempt:1,}" Apr 30 00:39:10.769273 systemd[1]: run-netns-cni\x2d21a8625a\x2db082\x2d07cb\x2de71f\x2da1e7906ab84c.mount: Deactivated successfully. 
Apr 30 00:39:10.780535 containerd[1816]: 2025-04-30 00:39:10.688 [INFO][4743] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" Apr 30 00:39:10.780535 containerd[1816]: 2025-04-30 00:39:10.688 [INFO][4743] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" iface="eth0" netns="/var/run/netns/cni-8087ace9-c2e1-5947-9e7e-5ca89d7b7d40" Apr 30 00:39:10.780535 containerd[1816]: 2025-04-30 00:39:10.694 [INFO][4743] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" iface="eth0" netns="/var/run/netns/cni-8087ace9-c2e1-5947-9e7e-5ca89d7b7d40" Apr 30 00:39:10.780535 containerd[1816]: 2025-04-30 00:39:10.694 [INFO][4743] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" iface="eth0" netns="/var/run/netns/cni-8087ace9-c2e1-5947-9e7e-5ca89d7b7d40" Apr 30 00:39:10.780535 containerd[1816]: 2025-04-30 00:39:10.694 [INFO][4743] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" Apr 30 00:39:10.780535 containerd[1816]: 2025-04-30 00:39:10.694 [INFO][4743] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" Apr 30 00:39:10.780535 containerd[1816]: 2025-04-30 00:39:10.740 [INFO][4776] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" HandleID="k8s-pod-network.66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" Workload="ci--4081.3.3--a--2b660cb835-k8s-csi--node--driver--tttw8-eth0" Apr 30 00:39:10.780535 containerd[1816]: 2025-04-30 00:39:10.741 [INFO][4776] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:39:10.780535 containerd[1816]: 2025-04-30 00:39:10.758 [INFO][4776] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:39:10.780535 containerd[1816]: 2025-04-30 00:39:10.772 [WARNING][4776] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" HandleID="k8s-pod-network.66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" Workload="ci--4081.3.3--a--2b660cb835-k8s-csi--node--driver--tttw8-eth0" Apr 30 00:39:10.780535 containerd[1816]: 2025-04-30 00:39:10.772 [INFO][4776] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" HandleID="k8s-pod-network.66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" Workload="ci--4081.3.3--a--2b660cb835-k8s-csi--node--driver--tttw8-eth0" Apr 30 00:39:10.780535 containerd[1816]: 2025-04-30 00:39:10.773 [INFO][4776] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:39:10.780535 containerd[1816]: 2025-04-30 00:39:10.775 [INFO][4743] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" Apr 30 00:39:10.781000 containerd[1816]: time="2025-04-30T00:39:10.780628576Z" level=info msg="TearDown network for sandbox \"66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672\" successfully" Apr 30 00:39:10.781000 containerd[1816]: time="2025-04-30T00:39:10.780647416Z" level=info msg="StopPodSandbox for \"66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672\" returns successfully" Apr 30 00:39:10.782910 containerd[1816]: time="2025-04-30T00:39:10.782510410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tttw8,Uid:eb05636f-1a4f-4911-8c79-8727fa131132,Namespace:calico-system,Attempt:1,}" Apr 30 00:39:10.784290 systemd[1]: run-netns-cni\x2d8087ace9\x2dc2e1\x2d5947\x2d9e7e\x2d5ca89d7b7d40.mount: Deactivated successfully. Apr 30 00:39:10.789776 containerd[1816]: 2025-04-30 00:39:10.702 [INFO][4739] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" Apr 30 00:39:10.789776 containerd[1816]: 2025-04-30 00:39:10.702 [INFO][4739] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" iface="eth0" netns="/var/run/netns/cni-fcd31fa8-90e4-658f-610c-0defe1e2f7d7" Apr 30 00:39:10.789776 containerd[1816]: 2025-04-30 00:39:10.703 [INFO][4739] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" iface="eth0" netns="/var/run/netns/cni-fcd31fa8-90e4-658f-610c-0defe1e2f7d7" Apr 30 00:39:10.789776 containerd[1816]: 2025-04-30 00:39:10.704 [INFO][4739] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" iface="eth0" netns="/var/run/netns/cni-fcd31fa8-90e4-658f-610c-0defe1e2f7d7" Apr 30 00:39:10.789776 containerd[1816]: 2025-04-30 00:39:10.704 [INFO][4739] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" Apr 30 00:39:10.789776 containerd[1816]: 2025-04-30 00:39:10.704 [INFO][4739] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" Apr 30 00:39:10.789776 containerd[1816]: 2025-04-30 00:39:10.744 [INFO][4784] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" HandleID="k8s-pod-network.2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--588cb9568f--qkvlw-eth0" Apr 30 00:39:10.789776 containerd[1816]: 2025-04-30 00:39:10.744 [INFO][4784] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:39:10.789776 containerd[1816]: 2025-04-30 00:39:10.773 [INFO][4784] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:39:10.789776 containerd[1816]: 2025-04-30 00:39:10.785 [WARNING][4784] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" HandleID="k8s-pod-network.2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--588cb9568f--qkvlw-eth0" Apr 30 00:39:10.789776 containerd[1816]: 2025-04-30 00:39:10.785 [INFO][4784] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" HandleID="k8s-pod-network.2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--588cb9568f--qkvlw-eth0" Apr 30 00:39:10.789776 containerd[1816]: 2025-04-30 00:39:10.787 [INFO][4784] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:39:10.789776 containerd[1816]: 2025-04-30 00:39:10.788 [INFO][4739] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" Apr 30 00:39:10.790139 containerd[1816]: time="2025-04-30T00:39:10.790070467Z" level=info msg="TearDown network for sandbox \"2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2\" successfully" Apr 30 00:39:10.790139 containerd[1816]: time="2025-04-30T00:39:10.790089507Z" level=info msg="StopPodSandbox for \"2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2\" returns successfully" Apr 30 00:39:10.792594 systemd[1]: run-netns-cni\x2dfcd31fa8\x2d90e4\x2d658f\x2d610c\x2d0defe1e2f7d7.mount: Deactivated successfully. 
Apr 30 00:39:10.793106 containerd[1816]: time="2025-04-30T00:39:10.793085258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-588cb9568f-qkvlw,Uid:6faae125-ceaa-469a-865a-339ba3fb0fe3,Namespace:calico-system,Attempt:1,}" Apr 30 00:39:13.914626 kubelet[3367]: I0430 00:39:13.914273 3367 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 00:39:16.032015 kernel: bpftool[4907]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 30 00:39:16.344770 systemd-networkd[1368]: cali1825527804f: Link UP Apr 30 00:39:16.348113 systemd-networkd[1368]: cali1825527804f: Gained carrier Apr 30 00:39:16.384735 containerd[1816]: 2025-04-30 00:39:16.138 [INFO][4881] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--bmpmw-eth0 coredns-7db6d8ff4d- kube-system b9f44dcb-0a4c-49fb-98eb-8a8269048d60 774 0 2025-04-30 00:38:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-a-2b660cb835 coredns-7db6d8ff4d-bmpmw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1825527804f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2dfc04725164779a5aafd667322289a3c0f3b1b93dad7cebfd7cbf211af77b26" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bmpmw" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--bmpmw-" Apr 30 00:39:16.384735 containerd[1816]: 2025-04-30 00:39:16.139 [INFO][4881] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2dfc04725164779a5aafd667322289a3c0f3b1b93dad7cebfd7cbf211af77b26" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bmpmw" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--bmpmw-eth0" Apr 30 00:39:16.384735 containerd[1816]: 
2025-04-30 00:39:16.233 [INFO][4935] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2dfc04725164779a5aafd667322289a3c0f3b1b93dad7cebfd7cbf211af77b26" HandleID="k8s-pod-network.2dfc04725164779a5aafd667322289a3c0f3b1b93dad7cebfd7cbf211af77b26" Workload="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--bmpmw-eth0" Apr 30 00:39:16.384735 containerd[1816]: 2025-04-30 00:39:16.251 [INFO][4935] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2dfc04725164779a5aafd667322289a3c0f3b1b93dad7cebfd7cbf211af77b26" HandleID="k8s-pod-network.2dfc04725164779a5aafd667322289a3c0f3b1b93dad7cebfd7cbf211af77b26" Workload="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--bmpmw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dcd0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-a-2b660cb835", "pod":"coredns-7db6d8ff4d-bmpmw", "timestamp":"2025-04-30 00:39:16.233270091 +0000 UTC"}, Hostname:"ci-4081.3.3-a-2b660cb835", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:39:16.384735 containerd[1816]: 2025-04-30 00:39:16.251 [INFO][4935] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:39:16.384735 containerd[1816]: 2025-04-30 00:39:16.251 [INFO][4935] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:39:16.384735 containerd[1816]: 2025-04-30 00:39:16.251 [INFO][4935] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-2b660cb835' Apr 30 00:39:16.384735 containerd[1816]: 2025-04-30 00:39:16.255 [INFO][4935] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2dfc04725164779a5aafd667322289a3c0f3b1b93dad7cebfd7cbf211af77b26" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:16.384735 containerd[1816]: 2025-04-30 00:39:16.261 [INFO][4935] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:16.384735 containerd[1816]: 2025-04-30 00:39:16.268 [INFO][4935] ipam/ipam.go 489: Trying affinity for 192.168.1.0/26 host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:16.384735 containerd[1816]: 2025-04-30 00:39:16.271 [INFO][4935] ipam/ipam.go 155: Attempting to load block cidr=192.168.1.0/26 host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:16.384735 containerd[1816]: 2025-04-30 00:39:16.277 [INFO][4935] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.1.0/26 host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:16.384735 containerd[1816]: 2025-04-30 00:39:16.277 [INFO][4935] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.1.0/26 handle="k8s-pod-network.2dfc04725164779a5aafd667322289a3c0f3b1b93dad7cebfd7cbf211af77b26" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:16.384735 containerd[1816]: 2025-04-30 00:39:16.281 [INFO][4935] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2dfc04725164779a5aafd667322289a3c0f3b1b93dad7cebfd7cbf211af77b26 Apr 30 00:39:16.384735 containerd[1816]: 2025-04-30 00:39:16.291 [INFO][4935] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.1.0/26 handle="k8s-pod-network.2dfc04725164779a5aafd667322289a3c0f3b1b93dad7cebfd7cbf211af77b26" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:16.384735 containerd[1816]: 2025-04-30 00:39:16.309 [INFO][4935] ipam/ipam.go 1216: Successfully claimed 
IPs: [192.168.1.1/26] block=192.168.1.0/26 handle="k8s-pod-network.2dfc04725164779a5aafd667322289a3c0f3b1b93dad7cebfd7cbf211af77b26" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:16.384735 containerd[1816]: 2025-04-30 00:39:16.309 [INFO][4935] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.1.1/26] handle="k8s-pod-network.2dfc04725164779a5aafd667322289a3c0f3b1b93dad7cebfd7cbf211af77b26" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:16.384735 containerd[1816]: 2025-04-30 00:39:16.309 [INFO][4935] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:39:16.384735 containerd[1816]: 2025-04-30 00:39:16.309 [INFO][4935] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.1.1/26] IPv6=[] ContainerID="2dfc04725164779a5aafd667322289a3c0f3b1b93dad7cebfd7cbf211af77b26" HandleID="k8s-pod-network.2dfc04725164779a5aafd667322289a3c0f3b1b93dad7cebfd7cbf211af77b26" Workload="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--bmpmw-eth0" Apr 30 00:39:16.385503 containerd[1816]: 2025-04-30 00:39:16.317 [INFO][4881] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2dfc04725164779a5aafd667322289a3c0f3b1b93dad7cebfd7cbf211af77b26" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bmpmw" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--bmpmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--bmpmw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b9f44dcb-0a4c-49fb-98eb-8a8269048d60", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-2b660cb835", ContainerID:"", Pod:"coredns-7db6d8ff4d-bmpmw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1825527804f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:39:16.385503 containerd[1816]: 2025-04-30 00:39:16.317 [INFO][4881] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.1.1/32] ContainerID="2dfc04725164779a5aafd667322289a3c0f3b1b93dad7cebfd7cbf211af77b26" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bmpmw" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--bmpmw-eth0" Apr 30 00:39:16.385503 containerd[1816]: 2025-04-30 00:39:16.317 [INFO][4881] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1825527804f ContainerID="2dfc04725164779a5aafd667322289a3c0f3b1b93dad7cebfd7cbf211af77b26" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bmpmw" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--bmpmw-eth0" Apr 30 00:39:16.385503 containerd[1816]: 2025-04-30 00:39:16.348 [INFO][4881] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="2dfc04725164779a5aafd667322289a3c0f3b1b93dad7cebfd7cbf211af77b26" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bmpmw" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--bmpmw-eth0" Apr 30 00:39:16.385503 containerd[1816]: 2025-04-30 00:39:16.349 [INFO][4881] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2dfc04725164779a5aafd667322289a3c0f3b1b93dad7cebfd7cbf211af77b26" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bmpmw" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--bmpmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--bmpmw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b9f44dcb-0a4c-49fb-98eb-8a8269048d60", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-2b660cb835", ContainerID:"2dfc04725164779a5aafd667322289a3c0f3b1b93dad7cebfd7cbf211af77b26", Pod:"coredns-7db6d8ff4d-bmpmw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1825527804f", MAC:"96:ea:90:75:03:dd", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:39:16.385503 containerd[1816]: 2025-04-30 00:39:16.370 [INFO][4881] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2dfc04725164779a5aafd667322289a3c0f3b1b93dad7cebfd7cbf211af77b26" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bmpmw" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--bmpmw-eth0" Apr 30 00:39:16.391852 systemd-networkd[1368]: calif40acfe022b: Link UP Apr 30 00:39:16.394474 systemd-networkd[1368]: calif40acfe022b: Gained carrier Apr 30 00:39:16.432338 containerd[1816]: 2025-04-30 00:39:16.188 [INFO][4890] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--kmccd-eth0 calico-apiserver-7688785779- calico-apiserver 8524bcc8-ef18-4787-b5da-973e0f7abb0b 776 0 2025-04-30 00:38:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7688785779 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-a-2b660cb835 calico-apiserver-7688785779-kmccd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif40acfe022b [] []}} ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" Namespace="calico-apiserver" Pod="calico-apiserver-7688785779-kmccd" 
WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--kmccd-" Apr 30 00:39:16.432338 containerd[1816]: 2025-04-30 00:39:16.190 [INFO][4890] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" Namespace="calico-apiserver" Pod="calico-apiserver-7688785779-kmccd" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--kmccd-eth0" Apr 30 00:39:16.432338 containerd[1816]: 2025-04-30 00:39:16.288 [INFO][4948] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" HandleID="k8s-pod-network.1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--kmccd-eth0" Apr 30 00:39:16.432338 containerd[1816]: 2025-04-30 00:39:16.312 [INFO][4948] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" HandleID="k8s-pod-network.1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--kmccd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031ab70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-a-2b660cb835", "pod":"calico-apiserver-7688785779-kmccd", "timestamp":"2025-04-30 00:39:16.288255208 +0000 UTC"}, Hostname:"ci-4081.3.3-a-2b660cb835", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:39:16.432338 containerd[1816]: 2025-04-30 00:39:16.312 [INFO][4948] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Apr 30 00:39:16.432338 containerd[1816]: 2025-04-30 00:39:16.312 [INFO][4948] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:39:16.432338 containerd[1816]: 2025-04-30 00:39:16.312 [INFO][4948] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-2b660cb835' Apr 30 00:39:16.432338 containerd[1816]: 2025-04-30 00:39:16.314 [INFO][4948] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:16.432338 containerd[1816]: 2025-04-30 00:39:16.326 [INFO][4948] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:16.432338 containerd[1816]: 2025-04-30 00:39:16.336 [INFO][4948] ipam/ipam.go 489: Trying affinity for 192.168.1.0/26 host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:16.432338 containerd[1816]: 2025-04-30 00:39:16.338 [INFO][4948] ipam/ipam.go 155: Attempting to load block cidr=192.168.1.0/26 host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:16.432338 containerd[1816]: 2025-04-30 00:39:16.342 [INFO][4948] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.1.0/26 host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:16.432338 containerd[1816]: 2025-04-30 00:39:16.343 [INFO][4948] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.1.0/26 handle="k8s-pod-network.1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:16.432338 containerd[1816]: 2025-04-30 00:39:16.347 [INFO][4948] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464 Apr 30 00:39:16.432338 containerd[1816]: 2025-04-30 00:39:16.355 [INFO][4948] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.1.0/26 handle="k8s-pod-network.1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" 
host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:16.432338 containerd[1816]: 2025-04-30 00:39:16.372 [INFO][4948] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.1.2/26] block=192.168.1.0/26 handle="k8s-pod-network.1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:16.432338 containerd[1816]: 2025-04-30 00:39:16.373 [INFO][4948] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.1.2/26] handle="k8s-pod-network.1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:16.432338 containerd[1816]: 2025-04-30 00:39:16.373 [INFO][4948] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:39:16.432338 containerd[1816]: 2025-04-30 00:39:16.373 [INFO][4948] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.1.2/26] IPv6=[] ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" HandleID="k8s-pod-network.1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--kmccd-eth0" Apr 30 00:39:16.432866 containerd[1816]: 2025-04-30 00:39:16.377 [INFO][4890] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" Namespace="calico-apiserver" Pod="calico-apiserver-7688785779-kmccd" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--kmccd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--kmccd-eth0", GenerateName:"calico-apiserver-7688785779-", Namespace:"calico-apiserver", SelfLink:"", UID:"8524bcc8-ef18-4787-b5da-973e0f7abb0b", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 44, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7688785779", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-2b660cb835", ContainerID:"", Pod:"calico-apiserver-7688785779-kmccd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif40acfe022b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:39:16.432866 containerd[1816]: 2025-04-30 00:39:16.385 [INFO][4890] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.1.2/32] ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" Namespace="calico-apiserver" Pod="calico-apiserver-7688785779-kmccd" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--kmccd-eth0" Apr 30 00:39:16.432866 containerd[1816]: 2025-04-30 00:39:16.385 [INFO][4890] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif40acfe022b ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" Namespace="calico-apiserver" Pod="calico-apiserver-7688785779-kmccd" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--kmccd-eth0" Apr 30 00:39:16.432866 containerd[1816]: 2025-04-30 00:39:16.395 [INFO][4890] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" Namespace="calico-apiserver" Pod="calico-apiserver-7688785779-kmccd" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--kmccd-eth0" Apr 30 00:39:16.432866 containerd[1816]: 2025-04-30 00:39:16.398 [INFO][4890] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" Namespace="calico-apiserver" Pod="calico-apiserver-7688785779-kmccd" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--kmccd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--kmccd-eth0", GenerateName:"calico-apiserver-7688785779-", Namespace:"calico-apiserver", SelfLink:"", UID:"8524bcc8-ef18-4787-b5da-973e0f7abb0b", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7688785779", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-2b660cb835", ContainerID:"1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464", Pod:"calico-apiserver-7688785779-kmccd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif40acfe022b", MAC:"ce:d1:7b:04:de:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:39:16.432866 containerd[1816]: 2025-04-30 00:39:16.422 [INFO][4890] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" Namespace="calico-apiserver" Pod="calico-apiserver-7688785779-kmccd" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--kmccd-eth0" Apr 30 00:39:16.463485 systemd-networkd[1368]: cali6ecd5eaa4f5: Link UP Apr 30 00:39:16.466071 systemd-networkd[1368]: cali6ecd5eaa4f5: Gained carrier Apr 30 00:39:16.501259 containerd[1816]: 2025-04-30 00:39:16.187 [INFO][4909] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--2b660cb835-k8s-csi--node--driver--tttw8-eth0 csi-node-driver- calico-system eb05636f-1a4f-4911-8c79-8727fa131132 775 0 2025-04-30 00:38:45 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.3-a-2b660cb835 csi-node-driver-tttw8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6ecd5eaa4f5 [] []}} ContainerID="9f1930368877bd2ce70f321089a2b308034f1fc991884c991bb4b7f0081d484b" Namespace="calico-system" Pod="csi-node-driver-tttw8" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-csi--node--driver--tttw8-" Apr 30 00:39:16.501259 containerd[1816]: 2025-04-30 00:39:16.188 [INFO][4909] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9f1930368877bd2ce70f321089a2b308034f1fc991884c991bb4b7f0081d484b" 
Namespace="calico-system" Pod="csi-node-driver-tttw8" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-csi--node--driver--tttw8-eth0" Apr 30 00:39:16.501259 containerd[1816]: 2025-04-30 00:39:16.286 [INFO][4952] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f1930368877bd2ce70f321089a2b308034f1fc991884c991bb4b7f0081d484b" HandleID="k8s-pod-network.9f1930368877bd2ce70f321089a2b308034f1fc991884c991bb4b7f0081d484b" Workload="ci--4081.3.3--a--2b660cb835-k8s-csi--node--driver--tttw8-eth0" Apr 30 00:39:16.501259 containerd[1816]: 2025-04-30 00:39:16.315 [INFO][4952] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9f1930368877bd2ce70f321089a2b308034f1fc991884c991bb4b7f0081d484b" HandleID="k8s-pod-network.9f1930368877bd2ce70f321089a2b308034f1fc991884c991bb4b7f0081d484b" Workload="ci--4081.3.3--a--2b660cb835-k8s-csi--node--driver--tttw8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000102870), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-a-2b660cb835", "pod":"csi-node-driver-tttw8", "timestamp":"2025-04-30 00:39:16.286925412 +0000 UTC"}, Hostname:"ci-4081.3.3-a-2b660cb835", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:39:16.501259 containerd[1816]: 2025-04-30 00:39:16.316 [INFO][4952] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:39:16.501259 containerd[1816]: 2025-04-30 00:39:16.376 [INFO][4952] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:39:16.501259 containerd[1816]: 2025-04-30 00:39:16.376 [INFO][4952] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-2b660cb835' Apr 30 00:39:16.501259 containerd[1816]: 2025-04-30 00:39:16.379 [INFO][4952] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9f1930368877bd2ce70f321089a2b308034f1fc991884c991bb4b7f0081d484b" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:16.501259 containerd[1816]: 2025-04-30 00:39:16.389 [INFO][4952] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:16.501259 containerd[1816]: 2025-04-30 00:39:16.403 [INFO][4952] ipam/ipam.go 489: Trying affinity for 192.168.1.0/26 host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:16.501259 containerd[1816]: 2025-04-30 00:39:16.406 [INFO][4952] ipam/ipam.go 155: Attempting to load block cidr=192.168.1.0/26 host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:16.501259 containerd[1816]: 2025-04-30 00:39:16.410 [INFO][4952] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.1.0/26 host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:16.501259 containerd[1816]: 2025-04-30 00:39:16.410 [INFO][4952] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.1.0/26 handle="k8s-pod-network.9f1930368877bd2ce70f321089a2b308034f1fc991884c991bb4b7f0081d484b" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:16.501259 containerd[1816]: 2025-04-30 00:39:16.414 [INFO][4952] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9f1930368877bd2ce70f321089a2b308034f1fc991884c991bb4b7f0081d484b Apr 30 00:39:16.501259 containerd[1816]: 2025-04-30 00:39:16.426 [INFO][4952] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.1.0/26 handle="k8s-pod-network.9f1930368877bd2ce70f321089a2b308034f1fc991884c991bb4b7f0081d484b" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:16.501259 containerd[1816]: 2025-04-30 00:39:16.439 [INFO][4952] ipam/ipam.go 1216: Successfully claimed 
IPs: [192.168.1.3/26] block=192.168.1.0/26 handle="k8s-pod-network.9f1930368877bd2ce70f321089a2b308034f1fc991884c991bb4b7f0081d484b" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:16.501259 containerd[1816]: 2025-04-30 00:39:16.439 [INFO][4952] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.1.3/26] handle="k8s-pod-network.9f1930368877bd2ce70f321089a2b308034f1fc991884c991bb4b7f0081d484b" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:16.501259 containerd[1816]: 2025-04-30 00:39:16.439 [INFO][4952] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:39:16.501259 containerd[1816]: 2025-04-30 00:39:16.439 [INFO][4952] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.1.3/26] IPv6=[] ContainerID="9f1930368877bd2ce70f321089a2b308034f1fc991884c991bb4b7f0081d484b" HandleID="k8s-pod-network.9f1930368877bd2ce70f321089a2b308034f1fc991884c991bb4b7f0081d484b" Workload="ci--4081.3.3--a--2b660cb835-k8s-csi--node--driver--tttw8-eth0" Apr 30 00:39:16.503285 containerd[1816]: 2025-04-30 00:39:16.449 [INFO][4909] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9f1930368877bd2ce70f321089a2b308034f1fc991884c991bb4b7f0081d484b" Namespace="calico-system" Pod="csi-node-driver-tttw8" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-csi--node--driver--tttw8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--2b660cb835-k8s-csi--node--driver--tttw8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eb05636f-1a4f-4911-8c79-8727fa131132", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-2b660cb835", ContainerID:"", Pod:"csi-node-driver-tttw8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.1.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6ecd5eaa4f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:39:16.503285 containerd[1816]: 2025-04-30 00:39:16.449 [INFO][4909] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.1.3/32] ContainerID="9f1930368877bd2ce70f321089a2b308034f1fc991884c991bb4b7f0081d484b" Namespace="calico-system" Pod="csi-node-driver-tttw8" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-csi--node--driver--tttw8-eth0" Apr 30 00:39:16.503285 containerd[1816]: 2025-04-30 00:39:16.449 [INFO][4909] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ecd5eaa4f5 ContainerID="9f1930368877bd2ce70f321089a2b308034f1fc991884c991bb4b7f0081d484b" Namespace="calico-system" Pod="csi-node-driver-tttw8" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-csi--node--driver--tttw8-eth0" Apr 30 00:39:16.503285 containerd[1816]: 2025-04-30 00:39:16.468 [INFO][4909] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9f1930368877bd2ce70f321089a2b308034f1fc991884c991bb4b7f0081d484b" Namespace="calico-system" Pod="csi-node-driver-tttw8" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-csi--node--driver--tttw8-eth0" Apr 30 00:39:16.503285 containerd[1816]: 2025-04-30 00:39:16.470 [INFO][4909] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="9f1930368877bd2ce70f321089a2b308034f1fc991884c991bb4b7f0081d484b" Namespace="calico-system" Pod="csi-node-driver-tttw8" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-csi--node--driver--tttw8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--2b660cb835-k8s-csi--node--driver--tttw8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eb05636f-1a4f-4911-8c79-8727fa131132", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-2b660cb835", ContainerID:"9f1930368877bd2ce70f321089a2b308034f1fc991884c991bb4b7f0081d484b", Pod:"csi-node-driver-tttw8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.1.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6ecd5eaa4f5", MAC:"b2:e1:ff:7f:3f:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:39:16.503285 containerd[1816]: 2025-04-30 00:39:16.491 [INFO][4909] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="9f1930368877bd2ce70f321089a2b308034f1fc991884c991bb4b7f0081d484b" Namespace="calico-system" Pod="csi-node-driver-tttw8" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-csi--node--driver--tttw8-eth0" Apr 30 00:39:16.526912 containerd[1816]: time="2025-04-30T00:39:16.526467264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:39:16.527065 containerd[1816]: time="2025-04-30T00:39:16.526824463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:39:16.532435 containerd[1816]: time="2025-04-30T00:39:16.531737449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:39:16.532435 containerd[1816]: time="2025-04-30T00:39:16.531795609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:39:16.532435 containerd[1816]: time="2025-04-30T00:39:16.531810128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:39:16.532435 containerd[1816]: time="2025-04-30T00:39:16.531918208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:39:16.534514 containerd[1816]: time="2025-04-30T00:39:16.534157282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:39:16.534863 containerd[1816]: time="2025-04-30T00:39:16.534770120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:39:16.554507 systemd-networkd[1368]: cali2930b9e6b02: Link UP Apr 30 00:39:16.555154 systemd-networkd[1368]: cali2930b9e6b02: Gained carrier Apr 30 00:39:16.562594 containerd[1816]: time="2025-04-30T00:39:16.560427684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:39:16.562594 containerd[1816]: time="2025-04-30T00:39:16.560841963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:39:16.562594 containerd[1816]: time="2025-04-30T00:39:16.560861243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:39:16.562594 containerd[1816]: time="2025-04-30T00:39:16.560959242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:39:16.602506 containerd[1816]: 2025-04-30 00:39:16.193 [INFO][4920] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--588cb9568f--qkvlw-eth0 calico-kube-controllers-588cb9568f- calico-system 6faae125-ceaa-469a-865a-339ba3fb0fe3 778 0 2025-04-30 00:38:45 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:588cb9568f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.3-a-2b660cb835 calico-kube-controllers-588cb9568f-qkvlw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2930b9e6b02 [] []}} ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" Namespace="calico-system" 
Pod="calico-kube-controllers-588cb9568f-qkvlw" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--588cb9568f--qkvlw-" Apr 30 00:39:16.602506 containerd[1816]: 2025-04-30 00:39:16.194 [INFO][4920] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" Namespace="calico-system" Pod="calico-kube-controllers-588cb9568f-qkvlw" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--588cb9568f--qkvlw-eth0" Apr 30 00:39:16.602506 containerd[1816]: 2025-04-30 00:39:16.314 [INFO][4946] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" HandleID="k8s-pod-network.d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--588cb9568f--qkvlw-eth0" Apr 30 00:39:16.602506 containerd[1816]: 2025-04-30 00:39:16.333 [INFO][4946] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" HandleID="k8s-pod-network.d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--588cb9568f--qkvlw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dd00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-a-2b660cb835", "pod":"calico-kube-controllers-588cb9568f-qkvlw", "timestamp":"2025-04-30 00:39:16.313476414 +0000 UTC"}, Hostname:"ci-4081.3.3-a-2b660cb835", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:39:16.602506 containerd[1816]: 2025-04-30 00:39:16.333 [INFO][4946] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Apr 30 00:39:16.602506 containerd[1816]: 2025-04-30 00:39:16.440 [INFO][4946] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:39:16.602506 containerd[1816]: 2025-04-30 00:39:16.440 [INFO][4946] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-2b660cb835' Apr 30 00:39:16.602506 containerd[1816]: 2025-04-30 00:39:16.443 [INFO][4946] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:16.602506 containerd[1816]: 2025-04-30 00:39:16.461 [INFO][4946] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:16.602506 containerd[1816]: 2025-04-30 00:39:16.478 [INFO][4946] ipam/ipam.go 489: Trying affinity for 192.168.1.0/26 host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:16.602506 containerd[1816]: 2025-04-30 00:39:16.491 [INFO][4946] ipam/ipam.go 155: Attempting to load block cidr=192.168.1.0/26 host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:16.602506 containerd[1816]: 2025-04-30 00:39:16.495 [INFO][4946] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.1.0/26 host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:16.602506 containerd[1816]: 2025-04-30 00:39:16.496 [INFO][4946] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.1.0/26 handle="k8s-pod-network.d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:16.602506 containerd[1816]: 2025-04-30 00:39:16.499 [INFO][4946] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d Apr 30 00:39:16.602506 containerd[1816]: 2025-04-30 00:39:16.506 [INFO][4946] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.1.0/26 handle="k8s-pod-network.d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" 
host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:16.602506 containerd[1816]: 2025-04-30 00:39:16.535 [INFO][4946] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.1.4/26] block=192.168.1.0/26 handle="k8s-pod-network.d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:16.602506 containerd[1816]: 2025-04-30 00:39:16.535 [INFO][4946] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.1.4/26] handle="k8s-pod-network.d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:16.602506 containerd[1816]: 2025-04-30 00:39:16.535 [INFO][4946] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:39:16.602506 containerd[1816]: 2025-04-30 00:39:16.535 [INFO][4946] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.1.4/26] IPv6=[] ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" HandleID="k8s-pod-network.d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--588cb9568f--qkvlw-eth0" Apr 30 00:39:16.603485 containerd[1816]: 2025-04-30 00:39:16.545 [INFO][4920] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" Namespace="calico-system" Pod="calico-kube-controllers-588cb9568f-qkvlw" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--588cb9568f--qkvlw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--588cb9568f--qkvlw-eth0", GenerateName:"calico-kube-controllers-588cb9568f-", Namespace:"calico-system", SelfLink:"", UID:"6faae125-ceaa-469a-865a-339ba3fb0fe3", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 45, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"588cb9568f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-2b660cb835", ContainerID:"", Pod:"calico-kube-controllers-588cb9568f-qkvlw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.1.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2930b9e6b02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:39:16.603485 containerd[1816]: 2025-04-30 00:39:16.546 [INFO][4920] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.1.4/32] ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" Namespace="calico-system" Pod="calico-kube-controllers-588cb9568f-qkvlw" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--588cb9568f--qkvlw-eth0" Apr 30 00:39:16.603485 containerd[1816]: 2025-04-30 00:39:16.546 [INFO][4920] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2930b9e6b02 ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" Namespace="calico-system" Pod="calico-kube-controllers-588cb9568f-qkvlw" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--588cb9568f--qkvlw-eth0" Apr 30 00:39:16.603485 containerd[1816]: 2025-04-30 00:39:16.559 [INFO][4920] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" Namespace="calico-system" Pod="calico-kube-controllers-588cb9568f-qkvlw" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--588cb9568f--qkvlw-eth0" Apr 30 00:39:16.603485 containerd[1816]: 2025-04-30 00:39:16.566 [INFO][4920] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" Namespace="calico-system" Pod="calico-kube-controllers-588cb9568f-qkvlw" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--588cb9568f--qkvlw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--588cb9568f--qkvlw-eth0", GenerateName:"calico-kube-controllers-588cb9568f-", Namespace:"calico-system", SelfLink:"", UID:"6faae125-ceaa-469a-865a-339ba3fb0fe3", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"588cb9568f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-2b660cb835", ContainerID:"d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d", Pod:"calico-kube-controllers-588cb9568f-qkvlw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.1.4/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2930b9e6b02", MAC:"f6:a9:cb:fd:52:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:39:16.603485 containerd[1816]: 2025-04-30 00:39:16.594 [INFO][4920] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" Namespace="calico-system" Pod="calico-kube-controllers-588cb9568f-qkvlw" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--588cb9568f--qkvlw-eth0" Apr 30 00:39:16.633647 systemd-networkd[1368]: vxlan.calico: Link UP Apr 30 00:39:16.633654 systemd-networkd[1368]: vxlan.calico: Gained carrier Apr 30 00:39:16.684140 containerd[1816]: time="2025-04-30T00:39:16.683906199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bmpmw,Uid:b9f44dcb-0a4c-49fb-98eb-8a8269048d60,Namespace:kube-system,Attempt:1,} returns sandbox id \"2dfc04725164779a5aafd667322289a3c0f3b1b93dad7cebfd7cbf211af77b26\"" Apr 30 00:39:16.688731 containerd[1816]: time="2025-04-30T00:39:16.688486586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tttw8,Uid:eb05636f-1a4f-4911-8c79-8727fa131132,Namespace:calico-system,Attempt:1,} returns sandbox id \"9f1930368877bd2ce70f321089a2b308034f1fc991884c991bb4b7f0081d484b\"" Apr 30 00:39:16.694411 containerd[1816]: time="2025-04-30T00:39:16.694388408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" Apr 30 00:39:16.695579 containerd[1816]: time="2025-04-30T00:39:16.695105246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:39:16.695579 containerd[1816]: time="2025-04-30T00:39:16.695158566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:39:16.695579 containerd[1816]: time="2025-04-30T00:39:16.695182766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:39:16.695579 containerd[1816]: time="2025-04-30T00:39:16.695256366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:39:16.706492 containerd[1816]: time="2025-04-30T00:39:16.705881694Z" level=info msg="CreateContainer within sandbox \"2dfc04725164779a5aafd667322289a3c0f3b1b93dad7cebfd7cbf211af77b26\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:39:16.774938 containerd[1816]: time="2025-04-30T00:39:16.774896170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7688785779-kmccd,Uid:8524bcc8-ef18-4787-b5da-973e0f7abb0b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464\"" Apr 30 00:39:16.847313 containerd[1816]: time="2025-04-30T00:39:16.847231317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-588cb9568f-qkvlw,Uid:6faae125-ceaa-469a-865a-339ba3fb0fe3,Namespace:calico-system,Attempt:1,} returns sandbox id \"d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d\"" Apr 30 00:39:17.215873 containerd[1816]: time="2025-04-30T00:39:17.215794508Z" level=info msg="CreateContainer within sandbox \"2dfc04725164779a5aafd667322289a3c0f3b1b93dad7cebfd7cbf211af77b26\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2e389102b4eeac350dee1519e0a59af21850d7f9caa9eac38bcff9be56636c46\"" Apr 30 00:39:17.216715 containerd[1816]: time="2025-04-30T00:39:17.216551705Z" level=info msg="StartContainer for \"2e389102b4eeac350dee1519e0a59af21850d7f9caa9eac38bcff9be56636c46\"" Apr 30 00:39:17.280847 containerd[1816]: 
time="2025-04-30T00:39:17.280804035Z" level=info msg="StartContainer for \"2e389102b4eeac350dee1519e0a59af21850d7f9caa9eac38bcff9be56636c46\" returns successfully" Apr 30 00:39:17.596529 systemd-networkd[1368]: calif40acfe022b: Gained IPv6LL Apr 30 00:39:17.788580 systemd-networkd[1368]: vxlan.calico: Gained IPv6LL Apr 30 00:39:17.853386 systemd-networkd[1368]: cali6ecd5eaa4f5: Gained IPv6LL Apr 30 00:39:17.878503 kubelet[3367]: I0430 00:39:17.878268 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-bmpmw" podStartSLOduration=45.87825035 podStartE2EDuration="45.87825035s" podCreationTimestamp="2025-04-30 00:38:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:39:17.8546659 +0000 UTC m=+59.365938897" watchObservedRunningTime="2025-04-30 00:39:17.87825035 +0000 UTC m=+59.389523307" Apr 30 00:39:18.228774 containerd[1816]: time="2025-04-30T00:39:18.228704355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:39:18.232290 containerd[1816]: time="2025-04-30T00:39:18.232032585Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" Apr 30 00:39:18.236914 systemd-networkd[1368]: cali2930b9e6b02: Gained IPv6LL Apr 30 00:39:18.239193 containerd[1816]: time="2025-04-30T00:39:18.238343766Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:39:18.244415 containerd[1816]: time="2025-04-30T00:39:18.244353389Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:39:18.245093 
containerd[1816]: time="2025-04-30T00:39:18.244962627Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.55029662s" Apr 30 00:39:18.245093 containerd[1816]: time="2025-04-30T00:39:18.244996667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" Apr 30 00:39:18.246419 containerd[1816]: time="2025-04-30T00:39:18.246165583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 00:39:18.248612 containerd[1816]: time="2025-04-30T00:39:18.248551776Z" level=info msg="CreateContainer within sandbox \"9f1930368877bd2ce70f321089a2b308034f1fc991884c991bb4b7f0081d484b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 30 00:39:18.300557 systemd-networkd[1368]: cali1825527804f: Gained IPv6LL Apr 30 00:39:18.304318 containerd[1816]: time="2025-04-30T00:39:18.304277811Z" level=info msg="CreateContainer within sandbox \"9f1930368877bd2ce70f321089a2b308034f1fc991884c991bb4b7f0081d484b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"caabe3f170279d556fc138746d652a7a88eab9f6d489e706b09ef8630ae89697\"" Apr 30 00:39:18.307117 containerd[1816]: time="2025-04-30T00:39:18.304788930Z" level=info msg="StartContainer for \"caabe3f170279d556fc138746d652a7a88eab9f6d489e706b09ef8630ae89697\"" Apr 30 00:39:18.357388 containerd[1816]: time="2025-04-30T00:39:18.357325815Z" level=info msg="StartContainer for \"caabe3f170279d556fc138746d652a7a88eab9f6d489e706b09ef8630ae89697\" returns successfully" Apr 30 00:39:18.601444 containerd[1816]: time="2025-04-30T00:39:18.601320494Z" level=info msg="StopPodSandbox for 
\"390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca\"" Apr 30 00:39:18.670013 containerd[1816]: 2025-04-30 00:39:18.637 [WARNING][5349] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--bmpmw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b9f44dcb-0a4c-49fb-98eb-8a8269048d60", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-2b660cb835", ContainerID:"2dfc04725164779a5aafd667322289a3c0f3b1b93dad7cebfd7cbf211af77b26", Pod:"coredns-7db6d8ff4d-bmpmw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1825527804f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:39:18.670013 containerd[1816]: 2025-04-30 00:39:18.637 [INFO][5349] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" Apr 30 00:39:18.670013 containerd[1816]: 2025-04-30 00:39:18.637 [INFO][5349] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" iface="eth0" netns="" Apr 30 00:39:18.670013 containerd[1816]: 2025-04-30 00:39:18.637 [INFO][5349] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" Apr 30 00:39:18.670013 containerd[1816]: 2025-04-30 00:39:18.637 [INFO][5349] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" Apr 30 00:39:18.670013 containerd[1816]: 2025-04-30 00:39:18.657 [INFO][5356] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" HandleID="k8s-pod-network.390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" Workload="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--bmpmw-eth0" Apr 30 00:39:18.670013 containerd[1816]: 2025-04-30 00:39:18.657 [INFO][5356] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:39:18.670013 containerd[1816]: 2025-04-30 00:39:18.657 [INFO][5356] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:39:18.670013 containerd[1816]: 2025-04-30 00:39:18.666 [WARNING][5356] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" HandleID="k8s-pod-network.390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" Workload="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--bmpmw-eth0" Apr 30 00:39:18.670013 containerd[1816]: 2025-04-30 00:39:18.666 [INFO][5356] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" HandleID="k8s-pod-network.390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" Workload="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--bmpmw-eth0" Apr 30 00:39:18.670013 containerd[1816]: 2025-04-30 00:39:18.667 [INFO][5356] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:39:18.670013 containerd[1816]: 2025-04-30 00:39:18.668 [INFO][5349] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" Apr 30 00:39:18.670891 containerd[1816]: time="2025-04-30T00:39:18.670050651Z" level=info msg="TearDown network for sandbox \"390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca\" successfully" Apr 30 00:39:18.670891 containerd[1816]: time="2025-04-30T00:39:18.670073611Z" level=info msg="StopPodSandbox for \"390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca\" returns successfully" Apr 30 00:39:18.671308 containerd[1816]: time="2025-04-30T00:39:18.671083688Z" level=info msg="RemovePodSandbox for \"390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca\"" Apr 30 00:39:18.671308 containerd[1816]: time="2025-04-30T00:39:18.671113648Z" level=info msg="Forcibly stopping sandbox \"390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca\"" Apr 30 00:39:18.741981 containerd[1816]: 2025-04-30 00:39:18.703 [WARNING][5375] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--bmpmw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b9f44dcb-0a4c-49fb-98eb-8a8269048d60", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-2b660cb835", ContainerID:"2dfc04725164779a5aafd667322289a3c0f3b1b93dad7cebfd7cbf211af77b26", Pod:"coredns-7db6d8ff4d-bmpmw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1825527804f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:39:18.741981 containerd[1816]: 2025-04-30 00:39:18.703 [INFO][5375] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" Apr 30 00:39:18.741981 containerd[1816]: 2025-04-30 00:39:18.703 [INFO][5375] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" iface="eth0" netns="" Apr 30 00:39:18.741981 containerd[1816]: 2025-04-30 00:39:18.703 [INFO][5375] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" Apr 30 00:39:18.741981 containerd[1816]: 2025-04-30 00:39:18.703 [INFO][5375] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" Apr 30 00:39:18.741981 containerd[1816]: 2025-04-30 00:39:18.725 [INFO][5382] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" HandleID="k8s-pod-network.390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" Workload="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--bmpmw-eth0" Apr 30 00:39:18.741981 containerd[1816]: 2025-04-30 00:39:18.725 [INFO][5382] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:39:18.741981 containerd[1816]: 2025-04-30 00:39:18.725 [INFO][5382] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:39:18.741981 containerd[1816]: 2025-04-30 00:39:18.734 [WARNING][5382] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" HandleID="k8s-pod-network.390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" Workload="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--bmpmw-eth0" Apr 30 00:39:18.741981 containerd[1816]: 2025-04-30 00:39:18.734 [INFO][5382] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" HandleID="k8s-pod-network.390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" Workload="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--bmpmw-eth0" Apr 30 00:39:18.741981 containerd[1816]: 2025-04-30 00:39:18.739 [INFO][5382] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:39:18.741981 containerd[1816]: 2025-04-30 00:39:18.740 [INFO][5375] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca" Apr 30 00:39:18.742482 containerd[1816]: time="2025-04-30T00:39:18.742008278Z" level=info msg="TearDown network for sandbox \"390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca\" successfully" Apr 30 00:39:18.770050 containerd[1816]: time="2025-04-30T00:39:18.770001035Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:39:18.770308 containerd[1816]: time="2025-04-30T00:39:18.770079995Z" level=info msg="RemovePodSandbox \"390b9dc4b150efededf972e04379297936ca012d6fd638b53a284c01144ce7ca\" returns successfully" Apr 30 00:39:18.770635 containerd[1816]: time="2025-04-30T00:39:18.770595434Z" level=info msg="StopPodSandbox for \"5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a\"" Apr 30 00:39:18.843384 containerd[1816]: 2025-04-30 00:39:18.806 [WARNING][5400] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--kmccd-eth0", GenerateName:"calico-apiserver-7688785779-", Namespace:"calico-apiserver", SelfLink:"", UID:"8524bcc8-ef18-4787-b5da-973e0f7abb0b", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7688785779", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-2b660cb835", ContainerID:"1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464", Pod:"calico-apiserver-7688785779-kmccd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif40acfe022b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:39:18.843384 containerd[1816]: 2025-04-30 00:39:18.806 [INFO][5400] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" Apr 30 00:39:18.843384 containerd[1816]: 2025-04-30 00:39:18.806 [INFO][5400] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" iface="eth0" netns="" Apr 30 00:39:18.843384 containerd[1816]: 2025-04-30 00:39:18.806 [INFO][5400] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" Apr 30 00:39:18.843384 containerd[1816]: 2025-04-30 00:39:18.806 [INFO][5400] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" Apr 30 00:39:18.843384 containerd[1816]: 2025-04-30 00:39:18.828 [INFO][5407] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" HandleID="k8s-pod-network.5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--kmccd-eth0" Apr 30 00:39:18.843384 containerd[1816]: 2025-04-30 00:39:18.828 [INFO][5407] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:39:18.843384 containerd[1816]: 2025-04-30 00:39:18.828 [INFO][5407] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:39:18.843384 containerd[1816]: 2025-04-30 00:39:18.836 [WARNING][5407] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" HandleID="k8s-pod-network.5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--kmccd-eth0" Apr 30 00:39:18.843384 containerd[1816]: 2025-04-30 00:39:18.836 [INFO][5407] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" HandleID="k8s-pod-network.5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--kmccd-eth0" Apr 30 00:39:18.843384 containerd[1816]: 2025-04-30 00:39:18.837 [INFO][5407] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:39:18.843384 containerd[1816]: 2025-04-30 00:39:18.839 [INFO][5400] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" Apr 30 00:39:18.844588 containerd[1816]: time="2025-04-30T00:39:18.843580418Z" level=info msg="TearDown network for sandbox \"5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a\" successfully" Apr 30 00:39:18.844588 containerd[1816]: time="2025-04-30T00:39:18.843609578Z" level=info msg="StopPodSandbox for \"5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a\" returns successfully" Apr 30 00:39:18.844588 containerd[1816]: time="2025-04-30T00:39:18.844099577Z" level=info msg="RemovePodSandbox for \"5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a\"" Apr 30 00:39:18.844588 containerd[1816]: time="2025-04-30T00:39:18.844123016Z" level=info msg="Forcibly stopping sandbox \"5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a\"" Apr 30 00:39:18.914064 containerd[1816]: 2025-04-30 00:39:18.881 [WARNING][5425] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--kmccd-eth0", GenerateName:"calico-apiserver-7688785779-", Namespace:"calico-apiserver", SelfLink:"", UID:"8524bcc8-ef18-4787-b5da-973e0f7abb0b", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7688785779", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-2b660cb835", ContainerID:"1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464", Pod:"calico-apiserver-7688785779-kmccd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif40acfe022b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:39:18.914064 containerd[1816]: 2025-04-30 00:39:18.881 [INFO][5425] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" Apr 30 00:39:18.914064 containerd[1816]: 2025-04-30 00:39:18.881 [INFO][5425] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" iface="eth0" netns="" Apr 30 00:39:18.914064 containerd[1816]: 2025-04-30 00:39:18.881 [INFO][5425] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" Apr 30 00:39:18.914064 containerd[1816]: 2025-04-30 00:39:18.881 [INFO][5425] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" Apr 30 00:39:18.914064 containerd[1816]: 2025-04-30 00:39:18.900 [INFO][5432] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" HandleID="k8s-pod-network.5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--kmccd-eth0" Apr 30 00:39:18.914064 containerd[1816]: 2025-04-30 00:39:18.900 [INFO][5432] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:39:18.914064 containerd[1816]: 2025-04-30 00:39:18.900 [INFO][5432] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:39:18.914064 containerd[1816]: 2025-04-30 00:39:18.909 [WARNING][5432] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" HandleID="k8s-pod-network.5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--kmccd-eth0" Apr 30 00:39:18.914064 containerd[1816]: 2025-04-30 00:39:18.909 [INFO][5432] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" HandleID="k8s-pod-network.5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--kmccd-eth0" Apr 30 00:39:18.914064 containerd[1816]: 2025-04-30 00:39:18.910 [INFO][5432] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:39:18.914064 containerd[1816]: 2025-04-30 00:39:18.911 [INFO][5425] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a" Apr 30 00:39:18.914064 containerd[1816]: time="2025-04-30T00:39:18.913195332Z" level=info msg="TearDown network for sandbox \"5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a\" successfully" Apr 30 00:39:18.924398 containerd[1816]: time="2025-04-30T00:39:18.924344819Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:39:18.924496 containerd[1816]: time="2025-04-30T00:39:18.924422739Z" level=info msg="RemovePodSandbox \"5d9cfeb259fc1af207d562252ef163c695ce04ddd645ee6cf531fd93397aed8a\" returns successfully" Apr 30 00:39:18.925134 containerd[1816]: time="2025-04-30T00:39:18.924903418Z" level=info msg="StopPodSandbox for \"66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672\"" Apr 30 00:39:18.994644 containerd[1816]: 2025-04-30 00:39:18.961 [WARNING][5451] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--2b660cb835-k8s-csi--node--driver--tttw8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eb05636f-1a4f-4911-8c79-8727fa131132", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-2b660cb835", ContainerID:"9f1930368877bd2ce70f321089a2b308034f1fc991884c991bb4b7f0081d484b", Pod:"csi-node-driver-tttw8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.1.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6ecd5eaa4f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:39:18.994644 containerd[1816]: 2025-04-30 00:39:18.961 [INFO][5451] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" Apr 30 00:39:18.994644 containerd[1816]: 2025-04-30 00:39:18.961 [INFO][5451] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" iface="eth0" netns="" Apr 30 00:39:18.994644 containerd[1816]: 2025-04-30 00:39:18.961 [INFO][5451] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" Apr 30 00:39:18.994644 containerd[1816]: 2025-04-30 00:39:18.961 [INFO][5451] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" Apr 30 00:39:18.994644 containerd[1816]: 2025-04-30 00:39:18.981 [INFO][5458] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" HandleID="k8s-pod-network.66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" Workload="ci--4081.3.3--a--2b660cb835-k8s-csi--node--driver--tttw8-eth0" Apr 30 00:39:18.994644 containerd[1816]: 2025-04-30 00:39:18.982 [INFO][5458] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:39:18.994644 containerd[1816]: 2025-04-30 00:39:18.982 [INFO][5458] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:39:18.994644 containerd[1816]: 2025-04-30 00:39:18.990 [WARNING][5458] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" HandleID="k8s-pod-network.66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" Workload="ci--4081.3.3--a--2b660cb835-k8s-csi--node--driver--tttw8-eth0" Apr 30 00:39:18.994644 containerd[1816]: 2025-04-30 00:39:18.990 [INFO][5458] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" HandleID="k8s-pod-network.66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" Workload="ci--4081.3.3--a--2b660cb835-k8s-csi--node--driver--tttw8-eth0" Apr 30 00:39:18.994644 containerd[1816]: 2025-04-30 00:39:18.991 [INFO][5458] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:39:18.994644 containerd[1816]: 2025-04-30 00:39:18.993 [INFO][5451] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" Apr 30 00:39:18.995194 containerd[1816]: time="2025-04-30T00:39:18.995068170Z" level=info msg="TearDown network for sandbox \"66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672\" successfully" Apr 30 00:39:18.995194 containerd[1816]: time="2025-04-30T00:39:18.995094130Z" level=info msg="StopPodSandbox for \"66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672\" returns successfully" Apr 30 00:39:18.996338 containerd[1816]: time="2025-04-30T00:39:18.996294527Z" level=info msg="RemovePodSandbox for \"66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672\"" Apr 30 00:39:18.996652 containerd[1816]: time="2025-04-30T00:39:18.996431526Z" level=info msg="Forcibly stopping sandbox \"66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672\"" Apr 30 00:39:19.064355 containerd[1816]: 2025-04-30 00:39:19.032 [WARNING][5476] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--2b660cb835-k8s-csi--node--driver--tttw8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eb05636f-1a4f-4911-8c79-8727fa131132", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-2b660cb835", ContainerID:"9f1930368877bd2ce70f321089a2b308034f1fc991884c991bb4b7f0081d484b", Pod:"csi-node-driver-tttw8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.1.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6ecd5eaa4f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:39:19.064355 containerd[1816]: 2025-04-30 00:39:19.032 [INFO][5476] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" Apr 30 00:39:19.064355 containerd[1816]: 2025-04-30 00:39:19.032 [INFO][5476] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" iface="eth0" netns="" Apr 30 00:39:19.064355 containerd[1816]: 2025-04-30 00:39:19.032 [INFO][5476] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" Apr 30 00:39:19.064355 containerd[1816]: 2025-04-30 00:39:19.032 [INFO][5476] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" Apr 30 00:39:19.064355 containerd[1816]: 2025-04-30 00:39:19.050 [INFO][5484] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" HandleID="k8s-pod-network.66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" Workload="ci--4081.3.3--a--2b660cb835-k8s-csi--node--driver--tttw8-eth0" Apr 30 00:39:19.064355 containerd[1816]: 2025-04-30 00:39:19.050 [INFO][5484] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:39:19.064355 containerd[1816]: 2025-04-30 00:39:19.050 [INFO][5484] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:39:19.064355 containerd[1816]: 2025-04-30 00:39:19.059 [WARNING][5484] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" HandleID="k8s-pod-network.66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" Workload="ci--4081.3.3--a--2b660cb835-k8s-csi--node--driver--tttw8-eth0" Apr 30 00:39:19.064355 containerd[1816]: 2025-04-30 00:39:19.060 [INFO][5484] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" HandleID="k8s-pod-network.66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" Workload="ci--4081.3.3--a--2b660cb835-k8s-csi--node--driver--tttw8-eth0" Apr 30 00:39:19.064355 containerd[1816]: 2025-04-30 00:39:19.061 [INFO][5484] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:39:19.064355 containerd[1816]: 2025-04-30 00:39:19.063 [INFO][5476] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672" Apr 30 00:39:19.064786 containerd[1816]: time="2025-04-30T00:39:19.064419966Z" level=info msg="TearDown network for sandbox \"66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672\" successfully" Apr 30 00:39:19.089112 containerd[1816]: time="2025-04-30T00:39:19.089069013Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:39:19.089223 containerd[1816]: time="2025-04-30T00:39:19.089134573Z" level=info msg="RemovePodSandbox \"66dbb4e86a089b24c20d464ded2fbe04512911241828fdeeab20c65926847672\" returns successfully" Apr 30 00:39:19.089574 containerd[1816]: time="2025-04-30T00:39:19.089547931Z" level=info msg="StopPodSandbox for \"2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2\"" Apr 30 00:39:19.154149 containerd[1816]: 2025-04-30 00:39:19.122 [WARNING][5502] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--588cb9568f--qkvlw-eth0", GenerateName:"calico-kube-controllers-588cb9568f-", Namespace:"calico-system", SelfLink:"", UID:"6faae125-ceaa-469a-865a-339ba3fb0fe3", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"588cb9568f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-2b660cb835", ContainerID:"d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d", Pod:"calico-kube-controllers-588cb9568f-qkvlw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.1.4/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2930b9e6b02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:39:19.154149 containerd[1816]: 2025-04-30 00:39:19.123 [INFO][5502] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" Apr 30 00:39:19.154149 containerd[1816]: 2025-04-30 00:39:19.123 [INFO][5502] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" iface="eth0" netns="" Apr 30 00:39:19.154149 containerd[1816]: 2025-04-30 00:39:19.123 [INFO][5502] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" Apr 30 00:39:19.154149 containerd[1816]: 2025-04-30 00:39:19.123 [INFO][5502] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" Apr 30 00:39:19.154149 containerd[1816]: 2025-04-30 00:39:19.141 [INFO][5509] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" HandleID="k8s-pod-network.2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--588cb9568f--qkvlw-eth0" Apr 30 00:39:19.154149 containerd[1816]: 2025-04-30 00:39:19.141 [INFO][5509] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:39:19.154149 containerd[1816]: 2025-04-30 00:39:19.141 [INFO][5509] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:39:19.154149 containerd[1816]: 2025-04-30 00:39:19.149 [WARNING][5509] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" HandleID="k8s-pod-network.2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--588cb9568f--qkvlw-eth0" Apr 30 00:39:19.154149 containerd[1816]: 2025-04-30 00:39:19.150 [INFO][5509] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" HandleID="k8s-pod-network.2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--588cb9568f--qkvlw-eth0" Apr 30 00:39:19.154149 containerd[1816]: 2025-04-30 00:39:19.151 [INFO][5509] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:39:19.154149 containerd[1816]: 2025-04-30 00:39:19.152 [INFO][5502] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" Apr 30 00:39:19.155176 containerd[1816]: time="2025-04-30T00:39:19.154186980Z" level=info msg="TearDown network for sandbox \"2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2\" successfully" Apr 30 00:39:19.155176 containerd[1816]: time="2025-04-30T00:39:19.154209740Z" level=info msg="StopPodSandbox for \"2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2\" returns successfully" Apr 30 00:39:19.155176 containerd[1816]: time="2025-04-30T00:39:19.154644579Z" level=info msg="RemovePodSandbox for \"2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2\"" Apr 30 00:39:19.155176 containerd[1816]: time="2025-04-30T00:39:19.154673019Z" level=info msg="Forcibly stopping sandbox \"2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2\"" Apr 30 00:39:19.221517 containerd[1816]: 2025-04-30 00:39:19.188 [WARNING][5527] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--588cb9568f--qkvlw-eth0", GenerateName:"calico-kube-controllers-588cb9568f-", Namespace:"calico-system", SelfLink:"", UID:"6faae125-ceaa-469a-865a-339ba3fb0fe3", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"588cb9568f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-2b660cb835", ContainerID:"d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d", Pod:"calico-kube-controllers-588cb9568f-qkvlw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.1.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2930b9e6b02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:39:19.221517 containerd[1816]: 2025-04-30 00:39:19.188 [INFO][5527] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" Apr 30 00:39:19.221517 containerd[1816]: 2025-04-30 00:39:19.188 [INFO][5527] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" iface="eth0" netns="" Apr 30 00:39:19.221517 containerd[1816]: 2025-04-30 00:39:19.188 [INFO][5527] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" Apr 30 00:39:19.221517 containerd[1816]: 2025-04-30 00:39:19.188 [INFO][5527] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" Apr 30 00:39:19.221517 containerd[1816]: 2025-04-30 00:39:19.209 [INFO][5534] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" HandleID="k8s-pod-network.2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--588cb9568f--qkvlw-eth0" Apr 30 00:39:19.221517 containerd[1816]: 2025-04-30 00:39:19.209 [INFO][5534] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:39:19.221517 containerd[1816]: 2025-04-30 00:39:19.209 [INFO][5534] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:39:19.221517 containerd[1816]: 2025-04-30 00:39:19.217 [WARNING][5534] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" HandleID="k8s-pod-network.2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--588cb9568f--qkvlw-eth0" Apr 30 00:39:19.221517 containerd[1816]: 2025-04-30 00:39:19.217 [INFO][5534] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" HandleID="k8s-pod-network.2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--588cb9568f--qkvlw-eth0" Apr 30 00:39:19.221517 containerd[1816]: 2025-04-30 00:39:19.218 [INFO][5534] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:39:19.221517 containerd[1816]: 2025-04-30 00:39:19.220 [INFO][5527] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2" Apr 30 00:39:19.222486 containerd[1816]: time="2025-04-30T00:39:19.221959540Z" level=info msg="TearDown network for sandbox \"2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2\" successfully" Apr 30 00:39:19.235067 containerd[1816]: time="2025-04-30T00:39:19.235014142Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:39:19.235534 containerd[1816]: time="2025-04-30T00:39:19.235077661Z" level=info msg="RemovePodSandbox \"2c67b6ca63203e44557e9850c25dfccada9407fb6eb1be14388296ace2d7bab2\" returns successfully" Apr 30 00:39:20.194416 containerd[1816]: time="2025-04-30T00:39:20.194092828Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:39:20.196694 containerd[1816]: time="2025-04-30T00:39:20.196593460Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" Apr 30 00:39:20.201197 containerd[1816]: time="2025-04-30T00:39:20.201153207Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:39:20.207911 containerd[1816]: time="2025-04-30T00:39:20.207847027Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:39:20.208782 containerd[1816]: time="2025-04-30T00:39:20.208672225Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 1.962449002s" Apr 30 00:39:20.208782 containerd[1816]: time="2025-04-30T00:39:20.208703065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" Apr 30 00:39:20.210227 containerd[1816]: time="2025-04-30T00:39:20.210044701Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" Apr 30 00:39:20.211192 containerd[1816]: time="2025-04-30T00:39:20.211157537Z" level=info msg="CreateContainer within sandbox \"1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 00:39:20.253926 containerd[1816]: time="2025-04-30T00:39:20.253886451Z" level=info msg="CreateContainer within sandbox \"1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"79f7e68666815ccd69fe46a752cebfe8328dec02f0c5f5e64daf6817c7aa3995\"" Apr 30 00:39:20.254630 containerd[1816]: time="2025-04-30T00:39:20.254441129Z" level=info msg="StartContainer for \"79f7e68666815ccd69fe46a752cebfe8328dec02f0c5f5e64daf6817c7aa3995\"" Apr 30 00:39:20.309583 containerd[1816]: time="2025-04-30T00:39:20.309501647Z" level=info msg="StartContainer for \"79f7e68666815ccd69fe46a752cebfe8328dec02f0c5f5e64daf6817c7aa3995\" returns successfully" Apr 30 00:39:20.579984 containerd[1816]: time="2025-04-30T00:39:20.579330970Z" level=info msg="StopPodSandbox for \"be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260\"" Apr 30 00:39:20.682158 containerd[1816]: 2025-04-30 00:39:20.628 [INFO][5597] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" Apr 30 00:39:20.682158 containerd[1816]: 2025-04-30 00:39:20.631 [INFO][5597] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" iface="eth0" netns="/var/run/netns/cni-154f3dc7-7f1d-abdb-7d41-129de939c78e" Apr 30 00:39:20.682158 containerd[1816]: 2025-04-30 00:39:20.631 [INFO][5597] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" iface="eth0" netns="/var/run/netns/cni-154f3dc7-7f1d-abdb-7d41-129de939c78e" Apr 30 00:39:20.682158 containerd[1816]: 2025-04-30 00:39:20.631 [INFO][5597] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" iface="eth0" netns="/var/run/netns/cni-154f3dc7-7f1d-abdb-7d41-129de939c78e" Apr 30 00:39:20.682158 containerd[1816]: 2025-04-30 00:39:20.631 [INFO][5597] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" Apr 30 00:39:20.682158 containerd[1816]: 2025-04-30 00:39:20.632 [INFO][5597] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" Apr 30 00:39:20.682158 containerd[1816]: 2025-04-30 00:39:20.666 [INFO][5604] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" HandleID="k8s-pod-network.be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" Workload="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--2q77n-eth0" Apr 30 00:39:20.682158 containerd[1816]: 2025-04-30 00:39:20.666 [INFO][5604] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:39:20.682158 containerd[1816]: 2025-04-30 00:39:20.666 [INFO][5604] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:39:20.682158 containerd[1816]: 2025-04-30 00:39:20.675 [WARNING][5604] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" HandleID="k8s-pod-network.be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" Workload="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--2q77n-eth0" Apr 30 00:39:20.682158 containerd[1816]: 2025-04-30 00:39:20.675 [INFO][5604] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" HandleID="k8s-pod-network.be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" Workload="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--2q77n-eth0" Apr 30 00:39:20.682158 containerd[1816]: 2025-04-30 00:39:20.677 [INFO][5604] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:39:20.682158 containerd[1816]: 2025-04-30 00:39:20.680 [INFO][5597] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" Apr 30 00:39:20.683043 containerd[1816]: time="2025-04-30T00:39:20.682763584Z" level=info msg="TearDown network for sandbox \"be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260\" successfully" Apr 30 00:39:20.683043 containerd[1816]: time="2025-04-30T00:39:20.682801624Z" level=info msg="StopPodSandbox for \"be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260\" returns successfully" Apr 30 00:39:20.684488 containerd[1816]: time="2025-04-30T00:39:20.683588061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2q77n,Uid:2e0ed15b-c5c4-4942-8821-6ea97e00435b,Namespace:kube-system,Attempt:1,}" Apr 30 00:39:20.687155 systemd[1]: run-netns-cni\x2d154f3dc7\x2d7f1d\x2dabdb\x2d7d41\x2d129de939c78e.mount: Deactivated successfully. 
Apr 30 00:39:20.834156 systemd-networkd[1368]: calif6b119671f3: Link UP Apr 30 00:39:20.834887 systemd-networkd[1368]: calif6b119671f3: Gained carrier Apr 30 00:39:20.867434 containerd[1816]: 2025-04-30 00:39:20.760 [INFO][5610] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--2q77n-eth0 coredns-7db6d8ff4d- kube-system 2e0ed15b-c5c4-4942-8821-6ea97e00435b 838 0 2025-04-30 00:38:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-a-2b660cb835 coredns-7db6d8ff4d-2q77n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif6b119671f3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="cb7c664963adc51d55d531c019da1e3bf12f253090dab4bf501bfd219352816d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2q77n" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--2q77n-" Apr 30 00:39:20.867434 containerd[1816]: 2025-04-30 00:39:20.760 [INFO][5610] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cb7c664963adc51d55d531c019da1e3bf12f253090dab4bf501bfd219352816d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2q77n" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--2q77n-eth0" Apr 30 00:39:20.867434 containerd[1816]: 2025-04-30 00:39:20.786 [INFO][5622] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cb7c664963adc51d55d531c019da1e3bf12f253090dab4bf501bfd219352816d" HandleID="k8s-pod-network.cb7c664963adc51d55d531c019da1e3bf12f253090dab4bf501bfd219352816d" Workload="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--2q77n-eth0" Apr 30 00:39:20.867434 containerd[1816]: 2025-04-30 00:39:20.799 [INFO][5622] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="cb7c664963adc51d55d531c019da1e3bf12f253090dab4bf501bfd219352816d" HandleID="k8s-pod-network.cb7c664963adc51d55d531c019da1e3bf12f253090dab4bf501bfd219352816d" Workload="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--2q77n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028f110), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-a-2b660cb835", "pod":"coredns-7db6d8ff4d-2q77n", "timestamp":"2025-04-30 00:39:20.786835436 +0000 UTC"}, Hostname:"ci-4081.3.3-a-2b660cb835", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:39:20.867434 containerd[1816]: 2025-04-30 00:39:20.799 [INFO][5622] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:39:20.867434 containerd[1816]: 2025-04-30 00:39:20.799 [INFO][5622] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:39:20.867434 containerd[1816]: 2025-04-30 00:39:20.799 [INFO][5622] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-2b660cb835' Apr 30 00:39:20.867434 containerd[1816]: 2025-04-30 00:39:20.801 [INFO][5622] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cb7c664963adc51d55d531c019da1e3bf12f253090dab4bf501bfd219352816d" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:20.867434 containerd[1816]: 2025-04-30 00:39:20.804 [INFO][5622] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:20.867434 containerd[1816]: 2025-04-30 00:39:20.808 [INFO][5622] ipam/ipam.go 489: Trying affinity for 192.168.1.0/26 host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:20.867434 containerd[1816]: 2025-04-30 00:39:20.809 [INFO][5622] ipam/ipam.go 155: Attempting to load block cidr=192.168.1.0/26 host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:20.867434 containerd[1816]: 2025-04-30 00:39:20.811 [INFO][5622] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.1.0/26 host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:20.867434 containerd[1816]: 2025-04-30 00:39:20.811 [INFO][5622] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.1.0/26 handle="k8s-pod-network.cb7c664963adc51d55d531c019da1e3bf12f253090dab4bf501bfd219352816d" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:20.867434 containerd[1816]: 2025-04-30 00:39:20.813 [INFO][5622] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cb7c664963adc51d55d531c019da1e3bf12f253090dab4bf501bfd219352816d Apr 30 00:39:20.867434 containerd[1816]: 2025-04-30 00:39:20.817 [INFO][5622] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.1.0/26 handle="k8s-pod-network.cb7c664963adc51d55d531c019da1e3bf12f253090dab4bf501bfd219352816d" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:20.867434 containerd[1816]: 2025-04-30 00:39:20.826 [INFO][5622] ipam/ipam.go 1216: Successfully claimed 
IPs: [192.168.1.5/26] block=192.168.1.0/26 handle="k8s-pod-network.cb7c664963adc51d55d531c019da1e3bf12f253090dab4bf501bfd219352816d" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:20.867434 containerd[1816]: 2025-04-30 00:39:20.826 [INFO][5622] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.1.5/26] handle="k8s-pod-network.cb7c664963adc51d55d531c019da1e3bf12f253090dab4bf501bfd219352816d" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:20.867434 containerd[1816]: 2025-04-30 00:39:20.826 [INFO][5622] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:39:20.867434 containerd[1816]: 2025-04-30 00:39:20.826 [INFO][5622] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.1.5/26] IPv6=[] ContainerID="cb7c664963adc51d55d531c019da1e3bf12f253090dab4bf501bfd219352816d" HandleID="k8s-pod-network.cb7c664963adc51d55d531c019da1e3bf12f253090dab4bf501bfd219352816d" Workload="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--2q77n-eth0" Apr 30 00:39:20.869357 containerd[1816]: 2025-04-30 00:39:20.829 [INFO][5610] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cb7c664963adc51d55d531c019da1e3bf12f253090dab4bf501bfd219352816d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2q77n" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--2q77n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--2q77n-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2e0ed15b-c5c4-4942-8821-6ea97e00435b", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-2b660cb835", ContainerID:"", Pod:"coredns-7db6d8ff4d-2q77n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif6b119671f3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:39:20.869357 containerd[1816]: 2025-04-30 00:39:20.829 [INFO][5610] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.1.5/32] ContainerID="cb7c664963adc51d55d531c019da1e3bf12f253090dab4bf501bfd219352816d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2q77n" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--2q77n-eth0" Apr 30 00:39:20.869357 containerd[1816]: 2025-04-30 00:39:20.829 [INFO][5610] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif6b119671f3 ContainerID="cb7c664963adc51d55d531c019da1e3bf12f253090dab4bf501bfd219352816d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2q77n" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--2q77n-eth0" Apr 30 00:39:20.869357 containerd[1816]: 2025-04-30 00:39:20.834 [INFO][5610] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="cb7c664963adc51d55d531c019da1e3bf12f253090dab4bf501bfd219352816d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2q77n" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--2q77n-eth0" Apr 30 00:39:20.869357 containerd[1816]: 2025-04-30 00:39:20.835 [INFO][5610] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cb7c664963adc51d55d531c019da1e3bf12f253090dab4bf501bfd219352816d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2q77n" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--2q77n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--2q77n-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2e0ed15b-c5c4-4942-8821-6ea97e00435b", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-2b660cb835", ContainerID:"cb7c664963adc51d55d531c019da1e3bf12f253090dab4bf501bfd219352816d", Pod:"coredns-7db6d8ff4d-2q77n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif6b119671f3", MAC:"56:d1:4f:12:49:01", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:39:20.869357 containerd[1816]: 2025-04-30 00:39:20.856 [INFO][5610] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cb7c664963adc51d55d531c019da1e3bf12f253090dab4bf501bfd219352816d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2q77n" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--2q77n-eth0" Apr 30 00:39:20.926690 containerd[1816]: time="2025-04-30T00:39:20.926529224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:39:20.926690 containerd[1816]: time="2025-04-30T00:39:20.926625463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:39:20.926690 containerd[1816]: time="2025-04-30T00:39:20.926661703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:39:20.928384 containerd[1816]: time="2025-04-30T00:39:20.927063742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:39:20.972699 containerd[1816]: time="2025-04-30T00:39:20.972649047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2q77n,Uid:2e0ed15b-c5c4-4942-8821-6ea97e00435b,Namespace:kube-system,Attempt:1,} returns sandbox id \"cb7c664963adc51d55d531c019da1e3bf12f253090dab4bf501bfd219352816d\"" Apr 30 00:39:20.977150 containerd[1816]: time="2025-04-30T00:39:20.977119354Z" level=info msg="CreateContainer within sandbox \"cb7c664963adc51d55d531c019da1e3bf12f253090dab4bf501bfd219352816d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:39:21.027113 containerd[1816]: time="2025-04-30T00:39:21.026939127Z" level=info msg="CreateContainer within sandbox \"cb7c664963adc51d55d531c019da1e3bf12f253090dab4bf501bfd219352816d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"545d717d4e529e0dcd42d6cdcc4ce3f02bc21790427f6e1eb1a5cb9202e70ac3\"" Apr 30 00:39:21.027705 containerd[1816]: time="2025-04-30T00:39:21.027679365Z" level=info msg="StartContainer for \"545d717d4e529e0dcd42d6cdcc4ce3f02bc21790427f6e1eb1a5cb9202e70ac3\"" Apr 30 00:39:21.081410 containerd[1816]: time="2025-04-30T00:39:21.081332966Z" level=info msg="StartContainer for \"545d717d4e529e0dcd42d6cdcc4ce3f02bc21790427f6e1eb1a5cb9202e70ac3\" returns successfully" Apr 30 00:39:21.579954 containerd[1816]: time="2025-04-30T00:39:21.579910333Z" level=info msg="StopPodSandbox for \"bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c\"" Apr 30 00:39:21.647379 kubelet[3367]: I0430 00:39:21.647302 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7688785779-kmccd" podStartSLOduration=34.215615714 podStartE2EDuration="37.647283454s" podCreationTimestamp="2025-04-30 00:38:44 +0000 UTC" firstStartedPulling="2025-04-30 00:39:16.777816722 +0000 UTC m=+58.289089719" lastFinishedPulling="2025-04-30 00:39:20.209484462 +0000 UTC 
m=+61.720757459" observedRunningTime="2025-04-30 00:39:20.885892304 +0000 UTC m=+62.397165301" watchObservedRunningTime="2025-04-30 00:39:21.647283454 +0000 UTC m=+63.158556451" Apr 30 00:39:21.715936 containerd[1816]: 2025-04-30 00:39:21.645 [INFO][5737] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" Apr 30 00:39:21.715936 containerd[1816]: 2025-04-30 00:39:21.655 [INFO][5737] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" iface="eth0" netns="/var/run/netns/cni-93f1f4bf-637f-daa1-cfb5-539a127cf18a" Apr 30 00:39:21.715936 containerd[1816]: 2025-04-30 00:39:21.655 [INFO][5737] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" iface="eth0" netns="/var/run/netns/cni-93f1f4bf-637f-daa1-cfb5-539a127cf18a" Apr 30 00:39:21.715936 containerd[1816]: 2025-04-30 00:39:21.655 [INFO][5737] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" iface="eth0" netns="/var/run/netns/cni-93f1f4bf-637f-daa1-cfb5-539a127cf18a" Apr 30 00:39:21.715936 containerd[1816]: 2025-04-30 00:39:21.655 [INFO][5737] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" Apr 30 00:39:21.715936 containerd[1816]: 2025-04-30 00:39:21.655 [INFO][5737] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" Apr 30 00:39:21.715936 containerd[1816]: 2025-04-30 00:39:21.692 [INFO][5749] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" HandleID="k8s-pod-network.bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--vxdt7-eth0" Apr 30 00:39:21.715936 containerd[1816]: 2025-04-30 00:39:21.692 [INFO][5749] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:39:21.715936 containerd[1816]: 2025-04-30 00:39:21.692 [INFO][5749] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:39:21.715936 containerd[1816]: 2025-04-30 00:39:21.705 [WARNING][5749] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" HandleID="k8s-pod-network.bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--vxdt7-eth0" Apr 30 00:39:21.715936 containerd[1816]: 2025-04-30 00:39:21.706 [INFO][5749] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" HandleID="k8s-pod-network.bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--vxdt7-eth0" Apr 30 00:39:21.715936 containerd[1816]: 2025-04-30 00:39:21.708 [INFO][5749] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:39:21.715936 containerd[1816]: 2025-04-30 00:39:21.712 [INFO][5737] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" Apr 30 00:39:21.719620 containerd[1816]: time="2025-04-30T00:39:21.716837889Z" level=info msg="TearDown network for sandbox \"bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c\" successfully" Apr 30 00:39:21.719620 containerd[1816]: time="2025-04-30T00:39:21.719323081Z" level=info msg="StopPodSandbox for \"bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c\" returns successfully" Apr 30 00:39:21.720398 containerd[1816]: time="2025-04-30T00:39:21.720064839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5db4fc9d49-vxdt7,Uid:90111c95-ac27-4e82-acee-f83ab191ee0f,Namespace:calico-apiserver,Attempt:1,}" Apr 30 00:39:21.721149 systemd[1]: run-netns-cni\x2d93f1f4bf\x2d637f\x2ddaa1\x2dcfb5\x2d539a127cf18a.mount: Deactivated successfully. 
Apr 30 00:39:21.871899 kubelet[3367]: I0430 00:39:21.871807 3367 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 00:39:21.909324 kubelet[3367]: I0430 00:39:21.908843 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-2q77n" podStartSLOduration=49.908824521 podStartE2EDuration="49.908824521s" podCreationTimestamp="2025-04-30 00:38:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:39:21.888399302 +0000 UTC m=+63.399672299" watchObservedRunningTime="2025-04-30 00:39:21.908824521 +0000 UTC m=+63.420097478" Apr 30 00:39:22.304287 containerd[1816]: time="2025-04-30T00:39:22.303568835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:39:22.307275 containerd[1816]: time="2025-04-30T00:39:22.307247664Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" Apr 30 00:39:22.314497 containerd[1816]: time="2025-04-30T00:39:22.314115764Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:39:22.325185 containerd[1816]: time="2025-04-30T00:39:22.325152691Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:39:22.326556 containerd[1816]: time="2025-04-30T00:39:22.325946209Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", 
repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 2.115865388s" Apr 30 00:39:22.326660 containerd[1816]: time="2025-04-30T00:39:22.326637047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" Apr 30 00:39:22.328208 containerd[1816]: time="2025-04-30T00:39:22.328185762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" Apr 30 00:39:22.341889 containerd[1816]: time="2025-04-30T00:39:22.341863562Z" level=info msg="CreateContainer within sandbox \"d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 30 00:39:22.399513 containerd[1816]: time="2025-04-30T00:39:22.399470872Z" level=info msg="CreateContainer within sandbox \"d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"bb50941ac73956e77b2ba01d9c0f043d3ab38d6bfa8a5279828be1f7c90ad49f\"" Apr 30 00:39:22.401855 containerd[1816]: time="2025-04-30T00:39:22.401578345Z" level=info msg="StartContainer for \"bb50941ac73956e77b2ba01d9c0f043d3ab38d6bfa8a5279828be1f7c90ad49f\"" Apr 30 00:39:22.447595 systemd-networkd[1368]: cali6a40a0a16a2: Link UP Apr 30 00:39:22.453303 systemd-networkd[1368]: cali6a40a0a16a2: Gained carrier Apr 30 00:39:22.476701 containerd[1816]: 2025-04-30 00:39:22.326 [INFO][5760] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--vxdt7-eth0 calico-apiserver-5db4fc9d49- calico-apiserver 90111c95-ac27-4e82-acee-f83ab191ee0f 851 0 2025-04-30 00:38:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver 
pod-template-hash:5db4fc9d49 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-a-2b660cb835 calico-apiserver-5db4fc9d49-vxdt7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6a40a0a16a2 [] []}} ContainerID="eca2e65d8739640b51213942d1297f9ac2273fbd9e49eaf366827ea22d17f3ae" Namespace="calico-apiserver" Pod="calico-apiserver-5db4fc9d49-vxdt7" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--vxdt7-" Apr 30 00:39:22.476701 containerd[1816]: 2025-04-30 00:39:22.326 [INFO][5760] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="eca2e65d8739640b51213942d1297f9ac2273fbd9e49eaf366827ea22d17f3ae" Namespace="calico-apiserver" Pod="calico-apiserver-5db4fc9d49-vxdt7" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--vxdt7-eth0" Apr 30 00:39:22.476701 containerd[1816]: 2025-04-30 00:39:22.364 [INFO][5774] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eca2e65d8739640b51213942d1297f9ac2273fbd9e49eaf366827ea22d17f3ae" HandleID="k8s-pod-network.eca2e65d8739640b51213942d1297f9ac2273fbd9e49eaf366827ea22d17f3ae" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--vxdt7-eth0" Apr 30 00:39:22.476701 containerd[1816]: 2025-04-30 00:39:22.383 [INFO][5774] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eca2e65d8739640b51213942d1297f9ac2273fbd9e49eaf366827ea22d17f3ae" HandleID="k8s-pod-network.eca2e65d8739640b51213942d1297f9ac2273fbd9e49eaf366827ea22d17f3ae" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--vxdt7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000332810), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-a-2b660cb835", "pod":"calico-apiserver-5db4fc9d49-vxdt7", "timestamp":"2025-04-30 
00:39:22.364934054 +0000 UTC"}, Hostname:"ci-4081.3.3-a-2b660cb835", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:39:22.476701 containerd[1816]: 2025-04-30 00:39:22.383 [INFO][5774] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:39:22.476701 containerd[1816]: 2025-04-30 00:39:22.383 [INFO][5774] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:39:22.476701 containerd[1816]: 2025-04-30 00:39:22.383 [INFO][5774] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-2b660cb835' Apr 30 00:39:22.476701 containerd[1816]: 2025-04-30 00:39:22.385 [INFO][5774] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.eca2e65d8739640b51213942d1297f9ac2273fbd9e49eaf366827ea22d17f3ae" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:22.476701 containerd[1816]: 2025-04-30 00:39:22.389 [INFO][5774] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:22.476701 containerd[1816]: 2025-04-30 00:39:22.404 [INFO][5774] ipam/ipam.go 489: Trying affinity for 192.168.1.0/26 host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:22.476701 containerd[1816]: 2025-04-30 00:39:22.409 [INFO][5774] ipam/ipam.go 155: Attempting to load block cidr=192.168.1.0/26 host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:22.476701 containerd[1816]: 2025-04-30 00:39:22.412 [INFO][5774] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.1.0/26 host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:22.476701 containerd[1816]: 2025-04-30 00:39:22.412 [INFO][5774] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.1.0/26 handle="k8s-pod-network.eca2e65d8739640b51213942d1297f9ac2273fbd9e49eaf366827ea22d17f3ae" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:22.476701 containerd[1816]: 
2025-04-30 00:39:22.415 [INFO][5774] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.eca2e65d8739640b51213942d1297f9ac2273fbd9e49eaf366827ea22d17f3ae Apr 30 00:39:22.476701 containerd[1816]: 2025-04-30 00:39:22.421 [INFO][5774] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.1.0/26 handle="k8s-pod-network.eca2e65d8739640b51213942d1297f9ac2273fbd9e49eaf366827ea22d17f3ae" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:22.476701 containerd[1816]: 2025-04-30 00:39:22.437 [INFO][5774] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.1.6/26] block=192.168.1.0/26 handle="k8s-pod-network.eca2e65d8739640b51213942d1297f9ac2273fbd9e49eaf366827ea22d17f3ae" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:22.476701 containerd[1816]: 2025-04-30 00:39:22.437 [INFO][5774] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.1.6/26] handle="k8s-pod-network.eca2e65d8739640b51213942d1297f9ac2273fbd9e49eaf366827ea22d17f3ae" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:22.476701 containerd[1816]: 2025-04-30 00:39:22.437 [INFO][5774] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 00:39:22.476701 containerd[1816]: 2025-04-30 00:39:22.437 [INFO][5774] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.1.6/26] IPv6=[] ContainerID="eca2e65d8739640b51213942d1297f9ac2273fbd9e49eaf366827ea22d17f3ae" HandleID="k8s-pod-network.eca2e65d8739640b51213942d1297f9ac2273fbd9e49eaf366827ea22d17f3ae" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--vxdt7-eth0" Apr 30 00:39:22.477280 containerd[1816]: 2025-04-30 00:39:22.443 [INFO][5760] cni-plugin/k8s.go 386: Populated endpoint ContainerID="eca2e65d8739640b51213942d1297f9ac2273fbd9e49eaf366827ea22d17f3ae" Namespace="calico-apiserver" Pod="calico-apiserver-5db4fc9d49-vxdt7" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--vxdt7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--vxdt7-eth0", GenerateName:"calico-apiserver-5db4fc9d49-", Namespace:"calico-apiserver", SelfLink:"", UID:"90111c95-ac27-4e82-acee-f83ab191ee0f", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5db4fc9d49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-2b660cb835", ContainerID:"", Pod:"calico-apiserver-5db4fc9d49-vxdt7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.1.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6a40a0a16a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:39:22.477280 containerd[1816]: 2025-04-30 00:39:22.443 [INFO][5760] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.1.6/32] ContainerID="eca2e65d8739640b51213942d1297f9ac2273fbd9e49eaf366827ea22d17f3ae" Namespace="calico-apiserver" Pod="calico-apiserver-5db4fc9d49-vxdt7" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--vxdt7-eth0" Apr 30 00:39:22.477280 containerd[1816]: 2025-04-30 00:39:22.443 [INFO][5760] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6a40a0a16a2 ContainerID="eca2e65d8739640b51213942d1297f9ac2273fbd9e49eaf366827ea22d17f3ae" Namespace="calico-apiserver" Pod="calico-apiserver-5db4fc9d49-vxdt7" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--vxdt7-eth0" Apr 30 00:39:22.477280 containerd[1816]: 2025-04-30 00:39:22.447 [INFO][5760] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eca2e65d8739640b51213942d1297f9ac2273fbd9e49eaf366827ea22d17f3ae" Namespace="calico-apiserver" Pod="calico-apiserver-5db4fc9d49-vxdt7" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--vxdt7-eth0" Apr 30 00:39:22.477280 containerd[1816]: 2025-04-30 00:39:22.451 [INFO][5760] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="eca2e65d8739640b51213942d1297f9ac2273fbd9e49eaf366827ea22d17f3ae" Namespace="calico-apiserver" Pod="calico-apiserver-5db4fc9d49-vxdt7" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--vxdt7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--vxdt7-eth0", GenerateName:"calico-apiserver-5db4fc9d49-", Namespace:"calico-apiserver", SelfLink:"", UID:"90111c95-ac27-4e82-acee-f83ab191ee0f", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5db4fc9d49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-2b660cb835", ContainerID:"eca2e65d8739640b51213942d1297f9ac2273fbd9e49eaf366827ea22d17f3ae", Pod:"calico-apiserver-5db4fc9d49-vxdt7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6a40a0a16a2", MAC:"ea:ee:55:94:f6:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:39:22.477280 containerd[1816]: 2025-04-30 00:39:22.473 [INFO][5760] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="eca2e65d8739640b51213942d1297f9ac2273fbd9e49eaf366827ea22d17f3ae" Namespace="calico-apiserver" Pod="calico-apiserver-5db4fc9d49-vxdt7" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--vxdt7-eth0" Apr 30 00:39:22.499358 containerd[1816]: time="2025-04-30T00:39:22.498840058Z" level=info msg="StartContainer for 
\"bb50941ac73956e77b2ba01d9c0f043d3ab38d6bfa8a5279828be1f7c90ad49f\" returns successfully" Apr 30 00:39:22.511932 containerd[1816]: time="2025-04-30T00:39:22.511478421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:39:22.511932 containerd[1816]: time="2025-04-30T00:39:22.511528181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:39:22.511932 containerd[1816]: time="2025-04-30T00:39:22.511551860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:39:22.511932 containerd[1816]: time="2025-04-30T00:39:22.511624420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:39:22.526148 systemd-networkd[1368]: calif6b119671f3: Gained IPv6LL Apr 30 00:39:22.557508 containerd[1816]: time="2025-04-30T00:39:22.557329285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5db4fc9d49-vxdt7,Uid:90111c95-ac27-4e82-acee-f83ab191ee0f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"eca2e65d8739640b51213942d1297f9ac2273fbd9e49eaf366827ea22d17f3ae\"" Apr 30 00:39:22.562057 containerd[1816]: time="2025-04-30T00:39:22.562022231Z" level=info msg="CreateContainer within sandbox \"eca2e65d8739640b51213942d1297f9ac2273fbd9e49eaf366827ea22d17f3ae\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 00:39:22.581201 containerd[1816]: time="2025-04-30T00:39:22.580494857Z" level=info msg="StopPodSandbox for \"979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146\"" Apr 30 00:39:22.617783 containerd[1816]: time="2025-04-30T00:39:22.617741467Z" level=info msg="CreateContainer within sandbox \"eca2e65d8739640b51213942d1297f9ac2273fbd9e49eaf366827ea22d17f3ae\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"faa88a1f3eeeaa1f8293e58a4a5f3a59ff10ff2c3fa93d139cc1599fe7387fb9\"" Apr 30 00:39:22.618573 containerd[1816]: time="2025-04-30T00:39:22.618515264Z" level=info msg="StartContainer for \"faa88a1f3eeeaa1f8293e58a4a5f3a59ff10ff2c3fa93d139cc1599fe7387fb9\"" Apr 30 00:39:22.685569 containerd[1816]: time="2025-04-30T00:39:22.685427867Z" level=info msg="StartContainer for \"faa88a1f3eeeaa1f8293e58a4a5f3a59ff10ff2c3fa93d139cc1599fe7387fb9\" returns successfully" Apr 30 00:39:22.689821 containerd[1816]: 2025-04-30 00:39:22.641 [INFO][5889] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" Apr 30 00:39:22.689821 containerd[1816]: 2025-04-30 00:39:22.641 [INFO][5889] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" iface="eth0" netns="/var/run/netns/cni-86fddac1-6449-373d-75a2-b2fa9ea0e2da" Apr 30 00:39:22.689821 containerd[1816]: 2025-04-30 00:39:22.642 [INFO][5889] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" iface="eth0" netns="/var/run/netns/cni-86fddac1-6449-373d-75a2-b2fa9ea0e2da" Apr 30 00:39:22.689821 containerd[1816]: 2025-04-30 00:39:22.642 [INFO][5889] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" iface="eth0" netns="/var/run/netns/cni-86fddac1-6449-373d-75a2-b2fa9ea0e2da" Apr 30 00:39:22.689821 containerd[1816]: 2025-04-30 00:39:22.643 [INFO][5889] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" Apr 30 00:39:22.689821 containerd[1816]: 2025-04-30 00:39:22.643 [INFO][5889] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" Apr 30 00:39:22.689821 containerd[1816]: 2025-04-30 00:39:22.671 [INFO][5915] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" HandleID="k8s-pod-network.979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0" Apr 30 00:39:22.689821 containerd[1816]: 2025-04-30 00:39:22.672 [INFO][5915] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:39:22.689821 containerd[1816]: 2025-04-30 00:39:22.672 [INFO][5915] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:39:22.689821 containerd[1816]: 2025-04-30 00:39:22.681 [WARNING][5915] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" HandleID="k8s-pod-network.979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0" Apr 30 00:39:22.689821 containerd[1816]: 2025-04-30 00:39:22.683 [INFO][5915] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" HandleID="k8s-pod-network.979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0" Apr 30 00:39:22.689821 containerd[1816]: 2025-04-30 00:39:22.684 [INFO][5915] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:39:22.689821 containerd[1816]: 2025-04-30 00:39:22.687 [INFO][5889] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" Apr 30 00:39:22.690513 containerd[1816]: time="2025-04-30T00:39:22.690017333Z" level=info msg="TearDown network for sandbox \"979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146\" successfully" Apr 30 00:39:22.690513 containerd[1816]: time="2025-04-30T00:39:22.690039533Z" level=info msg="StopPodSandbox for \"979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146\" returns successfully" Apr 30 00:39:22.690616 containerd[1816]: time="2025-04-30T00:39:22.690578812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7688785779-95fgn,Uid:9a6c46b4-3687-469f-a6e7-86919697db2e,Namespace:calico-apiserver,Attempt:1,}" Apr 30 00:39:22.882226 systemd-networkd[1368]: cali4e73823ca6a: Link UP Apr 30 00:39:22.882388 systemd-networkd[1368]: cali4e73823ca6a: Gained carrier Apr 30 00:39:22.900566 containerd[1816]: 2025-04-30 00:39:22.784 [INFO][5941] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0 calico-apiserver-7688785779- calico-apiserver 9a6c46b4-3687-469f-a6e7-86919697db2e 872 0 2025-04-30 00:38:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7688785779 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-a-2b660cb835 calico-apiserver-7688785779-95fgn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4e73823ca6a [] []}} ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" Namespace="calico-apiserver" Pod="calico-apiserver-7688785779-95fgn" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-" Apr 30 00:39:22.900566 containerd[1816]: 2025-04-30 00:39:22.785 [INFO][5941] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" Namespace="calico-apiserver" Pod="calico-apiserver-7688785779-95fgn" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0" Apr 30 00:39:22.900566 containerd[1816]: 2025-04-30 00:39:22.819 [INFO][5952] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" HandleID="k8s-pod-network.0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0" Apr 30 00:39:22.900566 containerd[1816]: 2025-04-30 00:39:22.836 [INFO][5952] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" HandleID="k8s-pod-network.0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" 
Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000319710), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-a-2b660cb835", "pod":"calico-apiserver-7688785779-95fgn", "timestamp":"2025-04-30 00:39:22.81963895 +0000 UTC"}, Hostname:"ci-4081.3.3-a-2b660cb835", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:39:22.900566 containerd[1816]: 2025-04-30 00:39:22.836 [INFO][5952] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:39:22.900566 containerd[1816]: 2025-04-30 00:39:22.836 [INFO][5952] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:39:22.900566 containerd[1816]: 2025-04-30 00:39:22.836 [INFO][5952] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-2b660cb835' Apr 30 00:39:22.900566 containerd[1816]: 2025-04-30 00:39:22.838 [INFO][5952] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:22.900566 containerd[1816]: 2025-04-30 00:39:22.841 [INFO][5952] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:22.900566 containerd[1816]: 2025-04-30 00:39:22.845 [INFO][5952] ipam/ipam.go 489: Trying affinity for 192.168.1.0/26 host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:22.900566 containerd[1816]: 2025-04-30 00:39:22.853 [INFO][5952] ipam/ipam.go 155: Attempting to load block cidr=192.168.1.0/26 host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:22.900566 containerd[1816]: 2025-04-30 00:39:22.855 [INFO][5952] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.1.0/26 host="ci-4081.3.3-a-2b660cb835" Apr 30 
00:39:22.900566 containerd[1816]: 2025-04-30 00:39:22.855 [INFO][5952] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.1.0/26 handle="k8s-pod-network.0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:22.900566 containerd[1816]: 2025-04-30 00:39:22.857 [INFO][5952] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643 Apr 30 00:39:22.900566 containerd[1816]: 2025-04-30 00:39:22.862 [INFO][5952] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.1.0/26 handle="k8s-pod-network.0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:22.900566 containerd[1816]: 2025-04-30 00:39:22.871 [INFO][5952] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.1.7/26] block=192.168.1.0/26 handle="k8s-pod-network.0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:22.900566 containerd[1816]: 2025-04-30 00:39:22.871 [INFO][5952] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.1.7/26] handle="k8s-pod-network.0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:22.900566 containerd[1816]: 2025-04-30 00:39:22.871 [INFO][5952] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 00:39:22.900566 containerd[1816]: 2025-04-30 00:39:22.871 [INFO][5952] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.1.7/26] IPv6=[] ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" HandleID="k8s-pod-network.0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0" Apr 30 00:39:22.902004 containerd[1816]: 2025-04-30 00:39:22.875 [INFO][5941] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" Namespace="calico-apiserver" Pod="calico-apiserver-7688785779-95fgn" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0", GenerateName:"calico-apiserver-7688785779-", Namespace:"calico-apiserver", SelfLink:"", UID:"9a6c46b4-3687-469f-a6e7-86919697db2e", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7688785779", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-2b660cb835", ContainerID:"", Pod:"calico-apiserver-7688785779-95fgn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.1.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e73823ca6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:39:22.902004 containerd[1816]: 2025-04-30 00:39:22.875 [INFO][5941] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.1.7/32] ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" Namespace="calico-apiserver" Pod="calico-apiserver-7688785779-95fgn" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0" Apr 30 00:39:22.902004 containerd[1816]: 2025-04-30 00:39:22.875 [INFO][5941] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e73823ca6a ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" Namespace="calico-apiserver" Pod="calico-apiserver-7688785779-95fgn" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0" Apr 30 00:39:22.902004 containerd[1816]: 2025-04-30 00:39:22.878 [INFO][5941] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" Namespace="calico-apiserver" Pod="calico-apiserver-7688785779-95fgn" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0" Apr 30 00:39:22.902004 containerd[1816]: 2025-04-30 00:39:22.878 [INFO][5941] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" Namespace="calico-apiserver" Pod="calico-apiserver-7688785779-95fgn" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0", GenerateName:"calico-apiserver-7688785779-", Namespace:"calico-apiserver", SelfLink:"", UID:"9a6c46b4-3687-469f-a6e7-86919697db2e", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7688785779", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-2b660cb835", ContainerID:"0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643", Pod:"calico-apiserver-7688785779-95fgn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e73823ca6a", MAC:"b2:3e:02:60:a1:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:39:22.902004 containerd[1816]: 2025-04-30 00:39:22.893 [INFO][5941] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" Namespace="calico-apiserver" Pod="calico-apiserver-7688785779-95fgn" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0" Apr 30 00:39:22.952156 kubelet[3367]: I0430 00:39:22.949767 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-apiserver/calico-apiserver-5db4fc9d49-vxdt7" podStartSLOduration=36.949750006 podStartE2EDuration="36.949750006s" podCreationTimestamp="2025-04-30 00:38:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:39:22.922100007 +0000 UTC m=+64.433373044" watchObservedRunningTime="2025-04-30 00:39:22.949750006 +0000 UTC m=+64.461023003" Apr 30 00:39:22.966827 containerd[1816]: time="2025-04-30T00:39:22.966213997Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:39:22.966827 containerd[1816]: time="2025-04-30T00:39:22.966603076Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:39:22.966827 containerd[1816]: time="2025-04-30T00:39:22.966623356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:39:22.968138 containerd[1816]: time="2025-04-30T00:39:22.966891395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:39:23.033837 containerd[1816]: time="2025-04-30T00:39:23.033716478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7688785779-95fgn,Uid:9a6c46b4-3687-469f-a6e7-86919697db2e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643\"" Apr 30 00:39:23.038505 containerd[1816]: time="2025-04-30T00:39:23.038345064Z" level=info msg="CreateContainer within sandbox \"0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 00:39:23.043460 kubelet[3367]: I0430 00:39:23.043258 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-588cb9568f-qkvlw" podStartSLOduration=32.564831077 podStartE2EDuration="38.04324121s" podCreationTimestamp="2025-04-30 00:38:45 +0000 UTC" firstStartedPulling="2025-04-30 00:39:16.849095311 +0000 UTC m=+58.360368308" lastFinishedPulling="2025-04-30 00:39:22.327505444 +0000 UTC m=+63.838778441" observedRunningTime="2025-04-30 00:39:22.950178365 +0000 UTC m=+64.461451362" watchObservedRunningTime="2025-04-30 00:39:23.04324121 +0000 UTC m=+64.554514207" Apr 30 00:39:23.122397 containerd[1816]: time="2025-04-30T00:39:23.121759294Z" level=info msg="CreateContainer within sandbox \"0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"baf73907776bfb92a47445e1876181334442ae2b15f03e8ae857da7c8498cd7f\"" Apr 30 00:39:23.122511 containerd[1816]: time="2025-04-30T00:39:23.122490852Z" level=info msg="StartContainer for \"baf73907776bfb92a47445e1876181334442ae2b15f03e8ae857da7c8498cd7f\"" Apr 30 00:39:23.187300 containerd[1816]: time="2025-04-30T00:39:23.186693738Z" level=info msg="StartContainer for 
\"baf73907776bfb92a47445e1876181334442ae2b15f03e8ae857da7c8498cd7f\" returns successfully" Apr 30 00:39:23.271749 systemd[1]: run-netns-cni\x2d86fddac1\x2d6449\x2d373d\x2d75a2\x2db2fa9ea0e2da.mount: Deactivated successfully. Apr 30 00:39:23.485513 systemd-networkd[1368]: cali6a40a0a16a2: Gained IPv6LL Apr 30 00:39:24.029487 kubelet[3367]: I0430 00:39:24.029429 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7688785779-95fgn" podStartSLOduration=40.029414556 podStartE2EDuration="40.029414556s" podCreationTimestamp="2025-04-30 00:38:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:39:23.939779586 +0000 UTC m=+65.451052583" watchObservedRunningTime="2025-04-30 00:39:24.029414556 +0000 UTC m=+65.540687553" Apr 30 00:39:24.508714 systemd-networkd[1368]: cali4e73823ca6a: Gained IPv6LL Apr 30 00:39:24.656282 containerd[1816]: time="2025-04-30T00:39:24.656226706Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:39:24.658164 kubelet[3367]: I0430 00:39:24.656120 3367 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 00:39:24.661204 containerd[1816]: time="2025-04-30T00:39:24.660594292Z" level=info msg="StopContainer for \"79f7e68666815ccd69fe46a752cebfe8328dec02f0c5f5e64daf6817c7aa3995\" with timeout 30 (s)" Apr 30 00:39:24.661204 containerd[1816]: time="2025-04-30T00:39:24.660668572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" Apr 30 00:39:24.661204 containerd[1816]: time="2025-04-30T00:39:24.660987411Z" level=info msg="Stop container \"79f7e68666815ccd69fe46a752cebfe8328dec02f0c5f5e64daf6817c7aa3995\" with signal terminated" Apr 30 00:39:24.666420 containerd[1816]: 
time="2025-04-30T00:39:24.665770557Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:39:24.685564 containerd[1816]: time="2025-04-30T00:39:24.684950419Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:39:24.685990 containerd[1816]: time="2025-04-30T00:39:24.685957376Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 2.357508454s" Apr 30 00:39:24.686043 containerd[1816]: time="2025-04-30T00:39:24.685990096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" Apr 30 00:39:24.691089 containerd[1816]: time="2025-04-30T00:39:24.691048841Z" level=info msg="CreateContainer within sandbox \"9f1930368877bd2ce70f321089a2b308034f1fc991884c991bb4b7f0081d484b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 30 00:39:24.707946 kubelet[3367]: I0430 00:39:24.707231 3367 topology_manager.go:215] "Topology Admit Handler" podUID="271ba7b9-d875-4a7d-ba9b-40abe3cb5a38" podNamespace="calico-apiserver" podName="calico-apiserver-5db4fc9d49-j5gj8" Apr 30 00:39:24.763481 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79f7e68666815ccd69fe46a752cebfe8328dec02f0c5f5e64daf6817c7aa3995-rootfs.mount: Deactivated successfully. 
Apr 30 00:39:24.816671 kubelet[3367]: I0430 00:39:24.816631 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/271ba7b9-d875-4a7d-ba9b-40abe3cb5a38-calico-apiserver-certs\") pod \"calico-apiserver-5db4fc9d49-j5gj8\" (UID: \"271ba7b9-d875-4a7d-ba9b-40abe3cb5a38\") " pod="calico-apiserver/calico-apiserver-5db4fc9d49-j5gj8" Apr 30 00:39:24.853571 kubelet[3367]: I0430 00:39:24.816700 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqcq7\" (UniqueName: \"kubernetes.io/projected/271ba7b9-d875-4a7d-ba9b-40abe3cb5a38-kube-api-access-kqcq7\") pod \"calico-apiserver-5db4fc9d49-j5gj8\" (UID: \"271ba7b9-d875-4a7d-ba9b-40abe3cb5a38\") " pod="calico-apiserver/calico-apiserver-5db4fc9d49-j5gj8" Apr 30 00:39:24.872810 containerd[1816]: time="2025-04-30T00:39:24.872339094Z" level=info msg="shim disconnected" id=79f7e68666815ccd69fe46a752cebfe8328dec02f0c5f5e64daf6817c7aa3995 namespace=k8s.io Apr 30 00:39:24.872810 containerd[1816]: time="2025-04-30T00:39:24.872430533Z" level=warning msg="cleaning up after shim disconnected" id=79f7e68666815ccd69fe46a752cebfe8328dec02f0c5f5e64daf6817c7aa3995 namespace=k8s.io Apr 30 00:39:24.872810 containerd[1816]: time="2025-04-30T00:39:24.872439973Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:39:24.882402 containerd[1816]: time="2025-04-30T00:39:24.882164904Z" level=info msg="CreateContainer within sandbox \"9f1930368877bd2ce70f321089a2b308034f1fc991884c991bb4b7f0081d484b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9e6375d7d17afc69bf52a8939241a8b54bc1ae1ba53e83c04b66aa12a73747f4\"" Apr 30 00:39:24.883544 containerd[1816]: time="2025-04-30T00:39:24.883037861Z" level=info msg="StartContainer for \"9e6375d7d17afc69bf52a8939241a8b54bc1ae1ba53e83c04b66aa12a73747f4\"" Apr 30 00:39:24.896410 
containerd[1816]: time="2025-04-30T00:39:24.894618187Z" level=info msg="StopContainer for \"79f7e68666815ccd69fe46a752cebfe8328dec02f0c5f5e64daf6817c7aa3995\" returns successfully" Apr 30 00:39:24.897060 containerd[1816]: time="2025-04-30T00:39:24.896917700Z" level=info msg="StopPodSandbox for \"1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464\"" Apr 30 00:39:24.897060 containerd[1816]: time="2025-04-30T00:39:24.896968059Z" level=info msg="Container to stop \"79f7e68666815ccd69fe46a752cebfe8328dec02f0c5f5e64daf6817c7aa3995\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:39:24.906135 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464-shm.mount: Deactivated successfully. Apr 30 00:39:24.959005 containerd[1816]: time="2025-04-30T00:39:24.958607394Z" level=info msg="shim disconnected" id=1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464 namespace=k8s.io Apr 30 00:39:24.959005 containerd[1816]: time="2025-04-30T00:39:24.959005112Z" level=warning msg="cleaning up after shim disconnected" id=1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464 namespace=k8s.io Apr 30 00:39:24.959005 containerd[1816]: time="2025-04-30T00:39:24.959016872Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:39:24.999636 containerd[1816]: time="2025-04-30T00:39:24.999292271Z" level=info msg="StartContainer for \"9e6375d7d17afc69bf52a8939241a8b54bc1ae1ba53e83c04b66aa12a73747f4\" returns successfully" Apr 30 00:39:25.016348 containerd[1816]: time="2025-04-30T00:39:25.015740821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5db4fc9d49-j5gj8,Uid:271ba7b9-d875-4a7d-ba9b-40abe3cb5a38,Namespace:calico-apiserver,Attempt:0,}" Apr 30 00:39:25.067289 systemd-networkd[1368]: calif40acfe022b: Link DOWN Apr 30 00:39:25.067299 systemd-networkd[1368]: calif40acfe022b: Lost carrier Apr 30 00:39:25.194041 
containerd[1816]: 2025-04-30 00:39:25.062 [INFO][6232] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" Apr 30 00:39:25.194041 containerd[1816]: 2025-04-30 00:39:25.062 [INFO][6232] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" iface="eth0" netns="/var/run/netns/cni-2d32301f-43c2-b27d-e1e5-4177b6f2a13b" Apr 30 00:39:25.194041 containerd[1816]: 2025-04-30 00:39:25.063 [INFO][6232] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" iface="eth0" netns="/var/run/netns/cni-2d32301f-43c2-b27d-e1e5-4177b6f2a13b" Apr 30 00:39:25.194041 containerd[1816]: 2025-04-30 00:39:25.075 [INFO][6232] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" after=13.23332ms iface="eth0" netns="/var/run/netns/cni-2d32301f-43c2-b27d-e1e5-4177b6f2a13b" Apr 30 00:39:25.194041 containerd[1816]: 2025-04-30 00:39:25.075 [INFO][6232] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" Apr 30 00:39:25.194041 containerd[1816]: 2025-04-30 00:39:25.075 [INFO][6232] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" Apr 30 00:39:25.194041 containerd[1816]: 2025-04-30 00:39:25.132 [INFO][6245] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" HandleID="k8s-pod-network.1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--kmccd-eth0" Apr 30 00:39:25.194041 containerd[1816]: 2025-04-30 00:39:25.132 [INFO][6245] ipam/ipam_plugin.go 
353: About to acquire host-wide IPAM lock. Apr 30 00:39:25.194041 containerd[1816]: 2025-04-30 00:39:25.132 [INFO][6245] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:39:25.194041 containerd[1816]: 2025-04-30 00:39:25.188 [INFO][6245] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" HandleID="k8s-pod-network.1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--kmccd-eth0" Apr 30 00:39:25.194041 containerd[1816]: 2025-04-30 00:39:25.188 [INFO][6245] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" HandleID="k8s-pod-network.1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--kmccd-eth0" Apr 30 00:39:25.194041 containerd[1816]: 2025-04-30 00:39:25.190 [INFO][6245] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:39:25.194041 containerd[1816]: 2025-04-30 00:39:25.191 [INFO][6232] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" Apr 30 00:39:25.195214 containerd[1816]: time="2025-04-30T00:39:25.195163000Z" level=info msg="TearDown network for sandbox \"1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464\" successfully" Apr 30 00:39:25.195214 containerd[1816]: time="2025-04-30T00:39:25.195194440Z" level=info msg="StopPodSandbox for \"1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464\" returns successfully" Apr 30 00:39:25.220815 kubelet[3367]: I0430 00:39:25.220754 3367 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhrcf\" (UniqueName: \"kubernetes.io/projected/8524bcc8-ef18-4787-b5da-973e0f7abb0b-kube-api-access-xhrcf\") pod \"8524bcc8-ef18-4787-b5da-973e0f7abb0b\" (UID: \"8524bcc8-ef18-4787-b5da-973e0f7abb0b\") " Apr 30 00:39:25.220815 kubelet[3367]: I0430 00:39:25.220800 3367 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8524bcc8-ef18-4787-b5da-973e0f7abb0b-calico-apiserver-certs\") pod \"8524bcc8-ef18-4787-b5da-973e0f7abb0b\" (UID: \"8524bcc8-ef18-4787-b5da-973e0f7abb0b\") " Apr 30 00:39:25.231729 kubelet[3367]: I0430 00:39:25.231430 3367 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8524bcc8-ef18-4787-b5da-973e0f7abb0b-kube-api-access-xhrcf" (OuterVolumeSpecName: "kube-api-access-xhrcf") pod "8524bcc8-ef18-4787-b5da-973e0f7abb0b" (UID: "8524bcc8-ef18-4787-b5da-973e0f7abb0b"). InnerVolumeSpecName "kube-api-access-xhrcf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 00:39:25.231848 kubelet[3367]: I0430 00:39:25.231806 3367 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8524bcc8-ef18-4787-b5da-973e0f7abb0b-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "8524bcc8-ef18-4787-b5da-973e0f7abb0b" (UID: "8524bcc8-ef18-4787-b5da-973e0f7abb0b"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 30 00:39:25.266427 systemd-networkd[1368]: cali96d8376d027: Link UP Apr 30 00:39:25.268569 systemd-networkd[1368]: cali96d8376d027: Gained carrier Apr 30 00:39:25.292124 containerd[1816]: 2025-04-30 00:39:25.143 [INFO][6243] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--j5gj8-eth0 calico-apiserver-5db4fc9d49- calico-apiserver 271ba7b9-d875-4a7d-ba9b-40abe3cb5a38 923 0 2025-04-30 00:39:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5db4fc9d49 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-a-2b660cb835 calico-apiserver-5db4fc9d49-j5gj8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali96d8376d027 [] []}} ContainerID="1e71d4c8dc75d93950f0a16c36d03acee116ff0a8b7abb0a51834d4c87319132" Namespace="calico-apiserver" Pod="calico-apiserver-5db4fc9d49-j5gj8" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--j5gj8-" Apr 30 00:39:25.292124 containerd[1816]: 2025-04-30 00:39:25.144 [INFO][6243] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1e71d4c8dc75d93950f0a16c36d03acee116ff0a8b7abb0a51834d4c87319132" Namespace="calico-apiserver" Pod="calico-apiserver-5db4fc9d49-j5gj8" 
WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--j5gj8-eth0" Apr 30 00:39:25.292124 containerd[1816]: 2025-04-30 00:39:25.171 [INFO][6263] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e71d4c8dc75d93950f0a16c36d03acee116ff0a8b7abb0a51834d4c87319132" HandleID="k8s-pod-network.1e71d4c8dc75d93950f0a16c36d03acee116ff0a8b7abb0a51834d4c87319132" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--j5gj8-eth0" Apr 30 00:39:25.292124 containerd[1816]: 2025-04-30 00:39:25.189 [INFO][6263] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1e71d4c8dc75d93950f0a16c36d03acee116ff0a8b7abb0a51834d4c87319132" HandleID="k8s-pod-network.1e71d4c8dc75d93950f0a16c36d03acee116ff0a8b7abb0a51834d4c87319132" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--j5gj8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028c7e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-a-2b660cb835", "pod":"calico-apiserver-5db4fc9d49-j5gj8", "timestamp":"2025-04-30 00:39:25.171480551 +0000 UTC"}, Hostname:"ci-4081.3.3-a-2b660cb835", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:39:25.292124 containerd[1816]: 2025-04-30 00:39:25.189 [INFO][6263] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:39:25.292124 containerd[1816]: 2025-04-30 00:39:25.190 [INFO][6263] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:39:25.292124 containerd[1816]: 2025-04-30 00:39:25.190 [INFO][6263] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-2b660cb835' Apr 30 00:39:25.292124 containerd[1816]: 2025-04-30 00:39:25.192 [INFO][6263] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1e71d4c8dc75d93950f0a16c36d03acee116ff0a8b7abb0a51834d4c87319132" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:25.292124 containerd[1816]: 2025-04-30 00:39:25.196 [INFO][6263] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:25.292124 containerd[1816]: 2025-04-30 00:39:25.205 [INFO][6263] ipam/ipam.go 489: Trying affinity for 192.168.1.0/26 host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:25.292124 containerd[1816]: 2025-04-30 00:39:25.211 [INFO][6263] ipam/ipam.go 155: Attempting to load block cidr=192.168.1.0/26 host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:25.292124 containerd[1816]: 2025-04-30 00:39:25.224 [INFO][6263] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.1.0/26 host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:25.292124 containerd[1816]: 2025-04-30 00:39:25.224 [INFO][6263] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.1.0/26 handle="k8s-pod-network.1e71d4c8dc75d93950f0a16c36d03acee116ff0a8b7abb0a51834d4c87319132" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:25.292124 containerd[1816]: 2025-04-30 00:39:25.227 [INFO][6263] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1e71d4c8dc75d93950f0a16c36d03acee116ff0a8b7abb0a51834d4c87319132 Apr 30 00:39:25.292124 containerd[1816]: 2025-04-30 00:39:25.233 [INFO][6263] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.1.0/26 handle="k8s-pod-network.1e71d4c8dc75d93950f0a16c36d03acee116ff0a8b7abb0a51834d4c87319132" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:25.292124 containerd[1816]: 2025-04-30 00:39:25.245 [INFO][6263] ipam/ipam.go 1216: Successfully claimed 
IPs: [192.168.1.8/26] block=192.168.1.0/26 handle="k8s-pod-network.1e71d4c8dc75d93950f0a16c36d03acee116ff0a8b7abb0a51834d4c87319132" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:25.292124 containerd[1816]: 2025-04-30 00:39:25.245 [INFO][6263] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.1.8/26] handle="k8s-pod-network.1e71d4c8dc75d93950f0a16c36d03acee116ff0a8b7abb0a51834d4c87319132" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:25.292124 containerd[1816]: 2025-04-30 00:39:25.245 [INFO][6263] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:39:25.292124 containerd[1816]: 2025-04-30 00:39:25.245 [INFO][6263] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.1.8/26] IPv6=[] ContainerID="1e71d4c8dc75d93950f0a16c36d03acee116ff0a8b7abb0a51834d4c87319132" HandleID="k8s-pod-network.1e71d4c8dc75d93950f0a16c36d03acee116ff0a8b7abb0a51834d4c87319132" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--j5gj8-eth0" Apr 30 00:39:25.292890 containerd[1816]: 2025-04-30 00:39:25.258 [INFO][6243] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1e71d4c8dc75d93950f0a16c36d03acee116ff0a8b7abb0a51834d4c87319132" Namespace="calico-apiserver" Pod="calico-apiserver-5db4fc9d49-j5gj8" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--j5gj8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--j5gj8-eth0", GenerateName:"calico-apiserver-5db4fc9d49-", Namespace:"calico-apiserver", SelfLink:"", UID:"271ba7b9-d875-4a7d-ba9b-40abe3cb5a38", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 39, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"5db4fc9d49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-2b660cb835", ContainerID:"", Pod:"calico-apiserver-5db4fc9d49-j5gj8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali96d8376d027", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:39:25.292890 containerd[1816]: 2025-04-30 00:39:25.259 [INFO][6243] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.1.8/32] ContainerID="1e71d4c8dc75d93950f0a16c36d03acee116ff0a8b7abb0a51834d4c87319132" Namespace="calico-apiserver" Pod="calico-apiserver-5db4fc9d49-j5gj8" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--j5gj8-eth0" Apr 30 00:39:25.292890 containerd[1816]: 2025-04-30 00:39:25.259 [INFO][6243] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali96d8376d027 ContainerID="1e71d4c8dc75d93950f0a16c36d03acee116ff0a8b7abb0a51834d4c87319132" Namespace="calico-apiserver" Pod="calico-apiserver-5db4fc9d49-j5gj8" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--j5gj8-eth0" Apr 30 00:39:25.292890 containerd[1816]: 2025-04-30 00:39:25.267 [INFO][6243] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1e71d4c8dc75d93950f0a16c36d03acee116ff0a8b7abb0a51834d4c87319132" Namespace="calico-apiserver" Pod="calico-apiserver-5db4fc9d49-j5gj8" 
WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--j5gj8-eth0" Apr 30 00:39:25.292890 containerd[1816]: 2025-04-30 00:39:25.269 [INFO][6243] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1e71d4c8dc75d93950f0a16c36d03acee116ff0a8b7abb0a51834d4c87319132" Namespace="calico-apiserver" Pod="calico-apiserver-5db4fc9d49-j5gj8" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--j5gj8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--j5gj8-eth0", GenerateName:"calico-apiserver-5db4fc9d49-", Namespace:"calico-apiserver", SelfLink:"", UID:"271ba7b9-d875-4a7d-ba9b-40abe3cb5a38", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 39, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5db4fc9d49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-2b660cb835", ContainerID:"1e71d4c8dc75d93950f0a16c36d03acee116ff0a8b7abb0a51834d4c87319132", Pod:"calico-apiserver-5db4fc9d49-j5gj8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali96d8376d027", MAC:"3a:dd:02:a2:da:ef", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:39:25.292890 containerd[1816]: 2025-04-30 00:39:25.290 [INFO][6243] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1e71d4c8dc75d93950f0a16c36d03acee116ff0a8b7abb0a51834d4c87319132" Namespace="calico-apiserver" Pod="calico-apiserver-5db4fc9d49-j5gj8" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--j5gj8-eth0" Apr 30 00:39:25.321827 kubelet[3367]: I0430 00:39:25.321787 3367 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-xhrcf\" (UniqueName: \"kubernetes.io/projected/8524bcc8-ef18-4787-b5da-973e0f7abb0b-kube-api-access-xhrcf\") on node \"ci-4081.3.3-a-2b660cb835\" DevicePath \"\"" Apr 30 00:39:25.321827 kubelet[3367]: I0430 00:39:25.321825 3367 reconciler_common.go:289] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8524bcc8-ef18-4787-b5da-973e0f7abb0b-calico-apiserver-certs\") on node \"ci-4081.3.3-a-2b660cb835\" DevicePath \"\"" Apr 30 00:39:25.324110 containerd[1816]: time="2025-04-30T00:39:25.323811732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:39:25.324110 containerd[1816]: time="2025-04-30T00:39:25.323906372Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:39:25.324110 containerd[1816]: time="2025-04-30T00:39:25.323920572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:39:25.326016 containerd[1816]: time="2025-04-30T00:39:25.324374570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:39:25.419733 containerd[1816]: time="2025-04-30T00:39:25.419592603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5db4fc9d49-j5gj8,Uid:271ba7b9-d875-4a7d-ba9b-40abe3cb5a38,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1e71d4c8dc75d93950f0a16c36d03acee116ff0a8b7abb0a51834d4c87319132\"" Apr 30 00:39:25.427005 containerd[1816]: time="2025-04-30T00:39:25.426309943Z" level=info msg="CreateContainer within sandbox \"1e71d4c8dc75d93950f0a16c36d03acee116ff0a8b7abb0a51834d4c87319132\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 00:39:25.489006 containerd[1816]: time="2025-04-30T00:39:25.488521075Z" level=info msg="CreateContainer within sandbox \"1e71d4c8dc75d93950f0a16c36d03acee116ff0a8b7abb0a51834d4c87319132\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"999ce816cbff6912dc193f0062d402dba721eb5720dd46ece0e4e08e954d5e7e\"" Apr 30 00:39:25.490251 containerd[1816]: time="2025-04-30T00:39:25.489896751Z" level=info msg="StartContainer for \"999ce816cbff6912dc193f0062d402dba721eb5720dd46ece0e4e08e954d5e7e\"" Apr 30 00:39:25.563558 containerd[1816]: time="2025-04-30T00:39:25.563439289Z" level=info msg="StartContainer for \"999ce816cbff6912dc193f0062d402dba721eb5720dd46ece0e4e08e954d5e7e\" returns successfully" Apr 30 00:39:25.757435 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464-rootfs.mount: Deactivated successfully. Apr 30 00:39:25.757583 systemd[1]: run-netns-cni\x2d2d32301f\x2d43c2\x2db27d\x2de1e5\x2d4177b6f2a13b.mount: Deactivated successfully. Apr 30 00:39:25.757664 systemd[1]: var-lib-kubelet-pods-8524bcc8\x2def18\x2d4787\x2db5da\x2d973e0f7abb0b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxhrcf.mount: Deactivated successfully. 
Apr 30 00:39:25.757745 systemd[1]: var-lib-kubelet-pods-8524bcc8\x2def18\x2d4787\x2db5da\x2d973e0f7abb0b-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Apr 30 00:39:25.791005 kubelet[3367]: I0430 00:39:25.790975 3367 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 30 00:39:25.798610 kubelet[3367]: I0430 00:39:25.798584 3367 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 30 00:39:25.942506 kubelet[3367]: I0430 00:39:25.942413 3367 scope.go:117] "RemoveContainer" containerID="79f7e68666815ccd69fe46a752cebfe8328dec02f0c5f5e64daf6817c7aa3995" Apr 30 00:39:25.949441 containerd[1816]: time="2025-04-30T00:39:25.948504288Z" level=info msg="RemoveContainer for \"79f7e68666815ccd69fe46a752cebfe8328dec02f0c5f5e64daf6817c7aa3995\"" Apr 30 00:39:25.958375 kubelet[3367]: I0430 00:39:25.958182 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5db4fc9d49-j5gj8" podStartSLOduration=1.958166099 podStartE2EDuration="1.958166099s" podCreationTimestamp="2025-04-30 00:39:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:39:25.9575713 +0000 UTC m=+67.468844297" watchObservedRunningTime="2025-04-30 00:39:25.958166099 +0000 UTC m=+67.469439096" Apr 30 00:39:25.971796 containerd[1816]: time="2025-04-30T00:39:25.971440259Z" level=info msg="RemoveContainer for \"79f7e68666815ccd69fe46a752cebfe8328dec02f0c5f5e64daf6817c7aa3995\" returns successfully" Apr 30 00:39:26.493531 systemd-networkd[1368]: cali96d8376d027: Gained IPv6LL Apr 30 00:39:26.584379 kubelet[3367]: I0430 00:39:26.582186 3367 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="8524bcc8-ef18-4787-b5da-973e0f7abb0b" path="/var/lib/kubelet/pods/8524bcc8-ef18-4787-b5da-973e0f7abb0b/volumes" Apr 30 00:39:26.923010 kubelet[3367]: I0430 00:39:26.921683 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-tttw8" podStartSLOduration=33.926131997 podStartE2EDuration="41.921667913s" podCreationTimestamp="2025-04-30 00:38:45 +0000 UTC" firstStartedPulling="2025-04-30 00:39:16.692179335 +0000 UTC m=+58.203452332" lastFinishedPulling="2025-04-30 00:39:24.687715251 +0000 UTC m=+66.198988248" observedRunningTime="2025-04-30 00:39:26.037536579 +0000 UTC m=+67.548809536" watchObservedRunningTime="2025-04-30 00:39:26.921667913 +0000 UTC m=+68.432940910" Apr 30 00:39:26.923435 containerd[1816]: time="2025-04-30T00:39:26.922771069Z" level=info msg="StopContainer for \"1f5a83f9ed857bfd4b0ee890e83f9bffb4b05806ec69f44babc20c82707b1644\" with timeout 300 (s)" Apr 30 00:39:26.924114 containerd[1816]: time="2025-04-30T00:39:26.924080265Z" level=info msg="Stop container \"1f5a83f9ed857bfd4b0ee890e83f9bffb4b05806ec69f44babc20c82707b1644\" with signal terminated" Apr 30 00:39:27.118627 containerd[1816]: time="2025-04-30T00:39:27.118582639Z" level=info msg="StopContainer for \"bb50941ac73956e77b2ba01d9c0f043d3ab38d6bfa8a5279828be1f7c90ad49f\" with timeout 30 (s)" Apr 30 00:39:27.119168 containerd[1816]: time="2025-04-30T00:39:27.118961398Z" level=info msg="Stop container \"bb50941ac73956e77b2ba01d9c0f043d3ab38d6bfa8a5279828be1f7c90ad49f\" with signal terminated" Apr 30 00:39:27.168257 containerd[1816]: time="2025-04-30T00:39:27.168212049Z" level=info msg="StopContainer for \"f1e4170f527b1e2fa33e07f6be8cc83571c1f6ff3afdd771a4399d29cac77db1\" with timeout 5 (s)" Apr 30 00:39:27.169123 containerd[1816]: time="2025-04-30T00:39:27.169102366Z" level=info msg="Stop container \"f1e4170f527b1e2fa33e07f6be8cc83571c1f6ff3afdd771a4399d29cac77db1\" with signal terminated" Apr 30 00:39:27.203134 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-bb50941ac73956e77b2ba01d9c0f043d3ab38d6bfa8a5279828be1f7c90ad49f-rootfs.mount: Deactivated successfully. Apr 30 00:39:27.254157 containerd[1816]: time="2025-04-30T00:39:27.254076950Z" level=info msg="shim disconnected" id=f1e4170f527b1e2fa33e07f6be8cc83571c1f6ff3afdd771a4399d29cac77db1 namespace=k8s.io Apr 30 00:39:27.254551 containerd[1816]: time="2025-04-30T00:39:27.254194750Z" level=warning msg="cleaning up after shim disconnected" id=f1e4170f527b1e2fa33e07f6be8cc83571c1f6ff3afdd771a4399d29cac77db1 namespace=k8s.io Apr 30 00:39:27.254551 containerd[1816]: time="2025-04-30T00:39:27.254206550Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:39:27.254402 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1e4170f527b1e2fa33e07f6be8cc83571c1f6ff3afdd771a4399d29cac77db1-rootfs.mount: Deactivated successfully. Apr 30 00:39:27.273173 containerd[1816]: time="2025-04-30T00:39:27.272734014Z" level=info msg="shim disconnected" id=bb50941ac73956e77b2ba01d9c0f043d3ab38d6bfa8a5279828be1f7c90ad49f namespace=k8s.io Apr 30 00:39:27.273173 containerd[1816]: time="2025-04-30T00:39:27.272981893Z" level=warning msg="cleaning up after shim disconnected" id=bb50941ac73956e77b2ba01d9c0f043d3ab38d6bfa8a5279828be1f7c90ad49f namespace=k8s.io Apr 30 00:39:27.273173 containerd[1816]: time="2025-04-30T00:39:27.272997773Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:39:27.296301 containerd[1816]: time="2025-04-30T00:39:27.296257663Z" level=info msg="StopContainer for \"f1e4170f527b1e2fa33e07f6be8cc83571c1f6ff3afdd771a4399d29cac77db1\" returns successfully" Apr 30 00:39:27.296921 containerd[1816]: time="2025-04-30T00:39:27.296895861Z" level=info msg="StopPodSandbox for \"fad51d16b15d6448e7bbc0492bbd9a84fd0be32d7572135f482648584d03bb3b\"" Apr 30 00:39:27.296985 containerd[1816]: time="2025-04-30T00:39:27.296932781Z" level=info msg="Container to stop 
\"f1e4170f527b1e2fa33e07f6be8cc83571c1f6ff3afdd771a4399d29cac77db1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:39:27.296985 containerd[1816]: time="2025-04-30T00:39:27.296944261Z" level=info msg="Container to stop \"9821e1b573aca504b6bf158ee40a664018f9dc6ca72cc11359b588534e7e99b1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:39:27.296985 containerd[1816]: time="2025-04-30T00:39:27.296955461Z" level=info msg="Container to stop \"a8d038527f718abdb38fc645a3c6be048fe2f8338c7e8f284a4b1bb2e00a75ae\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:39:27.301896 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fad51d16b15d6448e7bbc0492bbd9a84fd0be32d7572135f482648584d03bb3b-shm.mount: Deactivated successfully. Apr 30 00:39:27.307731 containerd[1816]: time="2025-04-30T00:39:27.306902391Z" level=info msg="StopContainer for \"bb50941ac73956e77b2ba01d9c0f043d3ab38d6bfa8a5279828be1f7c90ad49f\" returns successfully" Apr 30 00:39:27.310147 containerd[1816]: time="2025-04-30T00:39:27.310099981Z" level=info msg="StopPodSandbox for \"d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d\"" Apr 30 00:39:27.310236 containerd[1816]: time="2025-04-30T00:39:27.310156701Z" level=info msg="Container to stop \"bb50941ac73956e77b2ba01d9c0f043d3ab38d6bfa8a5279828be1f7c90ad49f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:39:27.315865 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d-shm.mount: Deactivated successfully. 
Apr 30 00:39:27.353594 containerd[1816]: time="2025-04-30T00:39:27.353531410Z" level=info msg="shim disconnected" id=fad51d16b15d6448e7bbc0492bbd9a84fd0be32d7572135f482648584d03bb3b namespace=k8s.io Apr 30 00:39:27.353594 containerd[1816]: time="2025-04-30T00:39:27.353584210Z" level=warning msg="cleaning up after shim disconnected" id=fad51d16b15d6448e7bbc0492bbd9a84fd0be32d7572135f482648584d03bb3b namespace=k8s.io Apr 30 00:39:27.353594 containerd[1816]: time="2025-04-30T00:39:27.353594530Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:39:27.374987 containerd[1816]: time="2025-04-30T00:39:27.374928865Z" level=info msg="shim disconnected" id=d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d namespace=k8s.io Apr 30 00:39:27.374987 containerd[1816]: time="2025-04-30T00:39:27.374980545Z" level=warning msg="cleaning up after shim disconnected" id=d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d namespace=k8s.io Apr 30 00:39:27.377324 containerd[1816]: time="2025-04-30T00:39:27.374989025Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:39:27.394727 containerd[1816]: time="2025-04-30T00:39:27.394685166Z" level=info msg="TearDown network for sandbox \"fad51d16b15d6448e7bbc0492bbd9a84fd0be32d7572135f482648584d03bb3b\" successfully" Apr 30 00:39:27.394727 containerd[1816]: time="2025-04-30T00:39:27.394720446Z" level=info msg="StopPodSandbox for \"fad51d16b15d6448e7bbc0492bbd9a84fd0be32d7572135f482648584d03bb3b\" returns successfully" Apr 30 00:39:27.435994 kubelet[3367]: I0430 00:39:27.435628 3367 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/facb84c5-8830-48c5-8675-03a9219d0eb7-xtables-lock\") pod \"facb84c5-8830-48c5-8675-03a9219d0eb7\" (UID: \"facb84c5-8830-48c5-8675-03a9219d0eb7\") " Apr 30 00:39:27.435994 kubelet[3367]: I0430 00:39:27.435667 3367 reconciler_common.go:161] "operationExecutor.UnmountVolume started 
for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/facb84c5-8830-48c5-8675-03a9219d0eb7-lib-modules\") pod \"facb84c5-8830-48c5-8675-03a9219d0eb7\" (UID: \"facb84c5-8830-48c5-8675-03a9219d0eb7\") " Apr 30 00:39:27.435994 kubelet[3367]: I0430 00:39:27.435682 3367 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/facb84c5-8830-48c5-8675-03a9219d0eb7-cni-net-dir\") pod \"facb84c5-8830-48c5-8675-03a9219d0eb7\" (UID: \"facb84c5-8830-48c5-8675-03a9219d0eb7\") " Apr 30 00:39:27.435994 kubelet[3367]: I0430 00:39:27.435706 3367 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/facb84c5-8830-48c5-8675-03a9219d0eb7-node-certs\") pod \"facb84c5-8830-48c5-8675-03a9219d0eb7\" (UID: \"facb84c5-8830-48c5-8675-03a9219d0eb7\") " Apr 30 00:39:27.435994 kubelet[3367]: I0430 00:39:27.435724 3367 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/facb84c5-8830-48c5-8675-03a9219d0eb7-var-lib-calico\") pod \"facb84c5-8830-48c5-8675-03a9219d0eb7\" (UID: \"facb84c5-8830-48c5-8675-03a9219d0eb7\") " Apr 30 00:39:27.435994 kubelet[3367]: I0430 00:39:27.435737 3367 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/facb84c5-8830-48c5-8675-03a9219d0eb7-policysync\") pod \"facb84c5-8830-48c5-8675-03a9219d0eb7\" (UID: \"facb84c5-8830-48c5-8675-03a9219d0eb7\") " Apr 30 00:39:27.436272 kubelet[3367]: I0430 00:39:27.435749 3367 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/facb84c5-8830-48c5-8675-03a9219d0eb7-cni-log-dir\") pod \"facb84c5-8830-48c5-8675-03a9219d0eb7\" (UID: \"facb84c5-8830-48c5-8675-03a9219d0eb7\") " Apr 30 00:39:27.436272 kubelet[3367]: I0430 00:39:27.435765 
3367 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/facb84c5-8830-48c5-8675-03a9219d0eb7-flexvol-driver-host\") pod \"facb84c5-8830-48c5-8675-03a9219d0eb7\" (UID: \"facb84c5-8830-48c5-8675-03a9219d0eb7\") " Apr 30 00:39:27.436272 kubelet[3367]: I0430 00:39:27.435782 3367 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/facb84c5-8830-48c5-8675-03a9219d0eb7-var-run-calico\") pod \"facb84c5-8830-48c5-8675-03a9219d0eb7\" (UID: \"facb84c5-8830-48c5-8675-03a9219d0eb7\") " Apr 30 00:39:27.436272 kubelet[3367]: I0430 00:39:27.435799 3367 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsg5f\" (UniqueName: \"kubernetes.io/projected/facb84c5-8830-48c5-8675-03a9219d0eb7-kube-api-access-nsg5f\") pod \"facb84c5-8830-48c5-8675-03a9219d0eb7\" (UID: \"facb84c5-8830-48c5-8675-03a9219d0eb7\") " Apr 30 00:39:27.436272 kubelet[3367]: I0430 00:39:27.435817 3367 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/facb84c5-8830-48c5-8675-03a9219d0eb7-cni-bin-dir\") pod \"facb84c5-8830-48c5-8675-03a9219d0eb7\" (UID: \"facb84c5-8830-48c5-8675-03a9219d0eb7\") " Apr 30 00:39:27.436272 kubelet[3367]: I0430 00:39:27.435836 3367 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/facb84c5-8830-48c5-8675-03a9219d0eb7-tigera-ca-bundle\") pod \"facb84c5-8830-48c5-8675-03a9219d0eb7\" (UID: \"facb84c5-8830-48c5-8675-03a9219d0eb7\") " Apr 30 00:39:27.436899 kubelet[3367]: I0430 00:39:27.436537 3367 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/facb84c5-8830-48c5-8675-03a9219d0eb7-policysync" (OuterVolumeSpecName: "policysync") pod 
"facb84c5-8830-48c5-8675-03a9219d0eb7" (UID: "facb84c5-8830-48c5-8675-03a9219d0eb7"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:39:27.436899 kubelet[3367]: I0430 00:39:27.436594 3367 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/facb84c5-8830-48c5-8675-03a9219d0eb7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "facb84c5-8830-48c5-8675-03a9219d0eb7" (UID: "facb84c5-8830-48c5-8675-03a9219d0eb7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:39:27.436899 kubelet[3367]: I0430 00:39:27.436611 3367 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/facb84c5-8830-48c5-8675-03a9219d0eb7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "facb84c5-8830-48c5-8675-03a9219d0eb7" (UID: "facb84c5-8830-48c5-8675-03a9219d0eb7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:39:27.436899 kubelet[3367]: I0430 00:39:27.436627 3367 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/facb84c5-8830-48c5-8675-03a9219d0eb7-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "facb84c5-8830-48c5-8675-03a9219d0eb7" (UID: "facb84c5-8830-48c5-8675-03a9219d0eb7"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:39:27.438616 kubelet[3367]: I0430 00:39:27.438595 3367 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/facb84c5-8830-48c5-8675-03a9219d0eb7-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "facb84c5-8830-48c5-8675-03a9219d0eb7" (UID: "facb84c5-8830-48c5-8675-03a9219d0eb7"). InnerVolumeSpecName "var-lib-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:39:27.440350 kubelet[3367]: I0430 00:39:27.439914 3367 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/facb84c5-8830-48c5-8675-03a9219d0eb7-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "facb84c5-8830-48c5-8675-03a9219d0eb7" (UID: "facb84c5-8830-48c5-8675-03a9219d0eb7"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:39:27.440350 kubelet[3367]: I0430 00:39:27.439934 3367 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/facb84c5-8830-48c5-8675-03a9219d0eb7-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "facb84c5-8830-48c5-8675-03a9219d0eb7" (UID: "facb84c5-8830-48c5-8675-03a9219d0eb7"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:39:27.440350 kubelet[3367]: I0430 00:39:27.439947 3367 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/facb84c5-8830-48c5-8675-03a9219d0eb7-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "facb84c5-8830-48c5-8675-03a9219d0eb7" (UID: "facb84c5-8830-48c5-8675-03a9219d0eb7"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:39:27.447789 kubelet[3367]: I0430 00:39:27.447689 3367 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/facb84c5-8830-48c5-8675-03a9219d0eb7-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "facb84c5-8830-48c5-8675-03a9219d0eb7" (UID: "facb84c5-8830-48c5-8675-03a9219d0eb7"). InnerVolumeSpecName "cni-bin-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:39:27.455510 kubelet[3367]: I0430 00:39:27.453273 3367 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/facb84c5-8830-48c5-8675-03a9219d0eb7-node-certs" (OuterVolumeSpecName: "node-certs") pod "facb84c5-8830-48c5-8675-03a9219d0eb7" (UID: "facb84c5-8830-48c5-8675-03a9219d0eb7"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 30 00:39:27.455510 kubelet[3367]: I0430 00:39:27.453353 3367 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/facb84c5-8830-48c5-8675-03a9219d0eb7-kube-api-access-nsg5f" (OuterVolumeSpecName: "kube-api-access-nsg5f") pod "facb84c5-8830-48c5-8675-03a9219d0eb7" (UID: "facb84c5-8830-48c5-8675-03a9219d0eb7"). InnerVolumeSpecName "kube-api-access-nsg5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 00:39:27.465956 kubelet[3367]: I0430 00:39:27.465905 3367 topology_manager.go:215] "Topology Admit Handler" podUID="3619d14f-5a71-42f8-b6ef-2a38cd93ede8" podNamespace="calico-system" podName="calico-node-gttvr" Apr 30 00:39:27.471098 kubelet[3367]: E0430 00:39:27.470064 3367 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8524bcc8-ef18-4787-b5da-973e0f7abb0b" containerName="calico-apiserver" Apr 30 00:39:27.471098 kubelet[3367]: E0430 00:39:27.470105 3367 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="facb84c5-8830-48c5-8675-03a9219d0eb7" containerName="flexvol-driver" Apr 30 00:39:27.471098 kubelet[3367]: E0430 00:39:27.470113 3367 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="facb84c5-8830-48c5-8675-03a9219d0eb7" containerName="install-cni" Apr 30 00:39:27.471098 kubelet[3367]: E0430 00:39:27.470119 3367 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="facb84c5-8830-48c5-8675-03a9219d0eb7" containerName="calico-node" Apr 30 00:39:27.471098 kubelet[3367]: I0430 
00:39:27.470169 3367 memory_manager.go:354] "RemoveStaleState removing state" podUID="facb84c5-8830-48c5-8675-03a9219d0eb7" containerName="calico-node" Apr 30 00:39:27.471098 kubelet[3367]: I0430 00:39:27.470177 3367 memory_manager.go:354] "RemoveStaleState removing state" podUID="8524bcc8-ef18-4787-b5da-973e0f7abb0b" containerName="calico-apiserver" Apr 30 00:39:27.475409 kubelet[3367]: I0430 00:39:27.474757 3367 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/facb84c5-8830-48c5-8675-03a9219d0eb7-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "facb84c5-8830-48c5-8675-03a9219d0eb7" (UID: "facb84c5-8830-48c5-8675-03a9219d0eb7"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 00:39:27.524907 systemd-networkd[1368]: cali2930b9e6b02: Link DOWN Apr 30 00:39:27.524918 systemd-networkd[1368]: cali2930b9e6b02: Lost carrier Apr 30 00:39:27.536816 kubelet[3367]: I0430 00:39:27.536718 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3619d14f-5a71-42f8-b6ef-2a38cd93ede8-xtables-lock\") pod \"calico-node-gttvr\" (UID: \"3619d14f-5a71-42f8-b6ef-2a38cd93ede8\") " pod="calico-system/calico-node-gttvr" Apr 30 00:39:27.536816 kubelet[3367]: I0430 00:39:27.536759 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3619d14f-5a71-42f8-b6ef-2a38cd93ede8-tigera-ca-bundle\") pod \"calico-node-gttvr\" (UID: \"3619d14f-5a71-42f8-b6ef-2a38cd93ede8\") " pod="calico-system/calico-node-gttvr" Apr 30 00:39:27.536816 kubelet[3367]: I0430 00:39:27.536777 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3619d14f-5a71-42f8-b6ef-2a38cd93ede8-var-lib-calico\") pod 
\"calico-node-gttvr\" (UID: \"3619d14f-5a71-42f8-b6ef-2a38cd93ede8\") " pod="calico-system/calico-node-gttvr" Apr 30 00:39:27.536816 kubelet[3367]: I0430 00:39:27.536793 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3619d14f-5a71-42f8-b6ef-2a38cd93ede8-cni-net-dir\") pod \"calico-node-gttvr\" (UID: \"3619d14f-5a71-42f8-b6ef-2a38cd93ede8\") " pod="calico-system/calico-node-gttvr" Apr 30 00:39:27.536816 kubelet[3367]: I0430 00:39:27.536812 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsblh\" (UniqueName: \"kubernetes.io/projected/3619d14f-5a71-42f8-b6ef-2a38cd93ede8-kube-api-access-vsblh\") pod \"calico-node-gttvr\" (UID: \"3619d14f-5a71-42f8-b6ef-2a38cd93ede8\") " pod="calico-system/calico-node-gttvr" Apr 30 00:39:27.537040 kubelet[3367]: I0430 00:39:27.536828 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3619d14f-5a71-42f8-b6ef-2a38cd93ede8-cni-log-dir\") pod \"calico-node-gttvr\" (UID: \"3619d14f-5a71-42f8-b6ef-2a38cd93ede8\") " pod="calico-system/calico-node-gttvr" Apr 30 00:39:27.537040 kubelet[3367]: I0430 00:39:27.536845 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3619d14f-5a71-42f8-b6ef-2a38cd93ede8-node-certs\") pod \"calico-node-gttvr\" (UID: \"3619d14f-5a71-42f8-b6ef-2a38cd93ede8\") " pod="calico-system/calico-node-gttvr" Apr 30 00:39:27.537040 kubelet[3367]: I0430 00:39:27.536861 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3619d14f-5a71-42f8-b6ef-2a38cd93ede8-cni-bin-dir\") pod \"calico-node-gttvr\" (UID: \"3619d14f-5a71-42f8-b6ef-2a38cd93ede8\") " 
pod="calico-system/calico-node-gttvr" Apr 30 00:39:27.537040 kubelet[3367]: I0430 00:39:27.536882 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3619d14f-5a71-42f8-b6ef-2a38cd93ede8-flexvol-driver-host\") pod \"calico-node-gttvr\" (UID: \"3619d14f-5a71-42f8-b6ef-2a38cd93ede8\") " pod="calico-system/calico-node-gttvr" Apr 30 00:39:27.537040 kubelet[3367]: I0430 00:39:27.536899 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3619d14f-5a71-42f8-b6ef-2a38cd93ede8-var-run-calico\") pod \"calico-node-gttvr\" (UID: \"3619d14f-5a71-42f8-b6ef-2a38cd93ede8\") " pod="calico-system/calico-node-gttvr" Apr 30 00:39:27.537149 kubelet[3367]: I0430 00:39:27.536917 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3619d14f-5a71-42f8-b6ef-2a38cd93ede8-lib-modules\") pod \"calico-node-gttvr\" (UID: \"3619d14f-5a71-42f8-b6ef-2a38cd93ede8\") " pod="calico-system/calico-node-gttvr" Apr 30 00:39:27.537149 kubelet[3367]: I0430 00:39:27.536930 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3619d14f-5a71-42f8-b6ef-2a38cd93ede8-policysync\") pod \"calico-node-gttvr\" (UID: \"3619d14f-5a71-42f8-b6ef-2a38cd93ede8\") " pod="calico-system/calico-node-gttvr" Apr 30 00:39:27.537149 kubelet[3367]: I0430 00:39:27.536953 3367 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/facb84c5-8830-48c5-8675-03a9219d0eb7-cni-bin-dir\") on node \"ci-4081.3.3-a-2b660cb835\" DevicePath \"\"" Apr 30 00:39:27.537149 kubelet[3367]: I0430 00:39:27.536961 3367 reconciler_common.go:289] "Volume detached for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/facb84c5-8830-48c5-8675-03a9219d0eb7-tigera-ca-bundle\") on node \"ci-4081.3.3-a-2b660cb835\" DevicePath \"\"" Apr 30 00:39:27.537149 kubelet[3367]: I0430 00:39:27.536969 3367 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/facb84c5-8830-48c5-8675-03a9219d0eb7-lib-modules\") on node \"ci-4081.3.3-a-2b660cb835\" DevicePath \"\"" Apr 30 00:39:27.537149 kubelet[3367]: I0430 00:39:27.536977 3367 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/facb84c5-8830-48c5-8675-03a9219d0eb7-cni-net-dir\") on node \"ci-4081.3.3-a-2b660cb835\" DevicePath \"\"" Apr 30 00:39:27.537149 kubelet[3367]: I0430 00:39:27.536985 3367 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/facb84c5-8830-48c5-8675-03a9219d0eb7-xtables-lock\") on node \"ci-4081.3.3-a-2b660cb835\" DevicePath \"\"" Apr 30 00:39:27.537294 kubelet[3367]: I0430 00:39:27.536992 3367 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/facb84c5-8830-48c5-8675-03a9219d0eb7-node-certs\") on node \"ci-4081.3.3-a-2b660cb835\" DevicePath \"\"" Apr 30 00:39:27.537294 kubelet[3367]: I0430 00:39:27.537000 3367 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/facb84c5-8830-48c5-8675-03a9219d0eb7-policysync\") on node \"ci-4081.3.3-a-2b660cb835\" DevicePath \"\"" Apr 30 00:39:27.537294 kubelet[3367]: I0430 00:39:27.537007 3367 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/facb84c5-8830-48c5-8675-03a9219d0eb7-cni-log-dir\") on node \"ci-4081.3.3-a-2b660cb835\" DevicePath \"\"" Apr 30 00:39:27.537294 kubelet[3367]: I0430 00:39:27.537015 3367 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/facb84c5-8830-48c5-8675-03a9219d0eb7-var-lib-calico\") on node \"ci-4081.3.3-a-2b660cb835\" DevicePath \"\"" Apr 30 00:39:27.537294 kubelet[3367]: I0430 00:39:27.537023 3367 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/facb84c5-8830-48c5-8675-03a9219d0eb7-flexvol-driver-host\") on node \"ci-4081.3.3-a-2b660cb835\" DevicePath \"\"" Apr 30 00:39:27.537294 kubelet[3367]: I0430 00:39:27.537034 3367 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/facb84c5-8830-48c5-8675-03a9219d0eb7-var-run-calico\") on node \"ci-4081.3.3-a-2b660cb835\" DevicePath \"\"" Apr 30 00:39:27.537294 kubelet[3367]: I0430 00:39:27.537042 3367 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-nsg5f\" (UniqueName: \"kubernetes.io/projected/facb84c5-8830-48c5-8675-03a9219d0eb7-kube-api-access-nsg5f\") on node \"ci-4081.3.3-a-2b660cb835\" DevicePath \"\"" Apr 30 00:39:27.619482 containerd[1816]: 2025-04-30 00:39:27.522 [INFO][6538] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" Apr 30 00:39:27.619482 containerd[1816]: 2025-04-30 00:39:27.523 [INFO][6538] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" iface="eth0" netns="/var/run/netns/cni-60684345-10ee-08e4-6e80-882473b753e7" Apr 30 00:39:27.619482 containerd[1816]: 2025-04-30 00:39:27.524 [INFO][6538] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" iface="eth0" netns="/var/run/netns/cni-60684345-10ee-08e4-6e80-882473b753e7" Apr 30 00:39:27.619482 containerd[1816]: 2025-04-30 00:39:27.534 [INFO][6538] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" after=11.222166ms iface="eth0" netns="/var/run/netns/cni-60684345-10ee-08e4-6e80-882473b753e7" Apr 30 00:39:27.619482 containerd[1816]: 2025-04-30 00:39:27.534 [INFO][6538] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" Apr 30 00:39:27.619482 containerd[1816]: 2025-04-30 00:39:27.534 [INFO][6538] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" Apr 30 00:39:27.619482 containerd[1816]: 2025-04-30 00:39:27.557 [INFO][6548] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" HandleID="k8s-pod-network.d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--588cb9568f--qkvlw-eth0" Apr 30 00:39:27.619482 containerd[1816]: 2025-04-30 00:39:27.557 [INFO][6548] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:39:27.619482 containerd[1816]: 2025-04-30 00:39:27.557 [INFO][6548] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:39:27.619482 containerd[1816]: 2025-04-30 00:39:27.615 [INFO][6548] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" HandleID="k8s-pod-network.d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--588cb9568f--qkvlw-eth0" Apr 30 00:39:27.619482 containerd[1816]: 2025-04-30 00:39:27.615 [INFO][6548] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" HandleID="k8s-pod-network.d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--588cb9568f--qkvlw-eth0" Apr 30 00:39:27.619482 containerd[1816]: 2025-04-30 00:39:27.616 [INFO][6548] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:39:27.619482 containerd[1816]: 2025-04-30 00:39:27.618 [INFO][6538] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" Apr 30 00:39:27.620567 containerd[1816]: time="2025-04-30T00:39:27.620468965Z" level=info msg="TearDown network for sandbox \"d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d\" successfully" Apr 30 00:39:27.620567 containerd[1816]: time="2025-04-30T00:39:27.620503245Z" level=info msg="StopPodSandbox for \"d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d\" returns successfully" Apr 30 00:39:27.637948 kubelet[3367]: I0430 00:39:27.637902 3367 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6faae125-ceaa-469a-865a-339ba3fb0fe3-tigera-ca-bundle\") pod \"6faae125-ceaa-469a-865a-339ba3fb0fe3\" (UID: \"6faae125-ceaa-469a-865a-339ba3fb0fe3\") " Apr 30 00:39:27.637948 kubelet[3367]: I0430 00:39:27.637941 3367 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w788s\" (UniqueName: \"kubernetes.io/projected/6faae125-ceaa-469a-865a-339ba3fb0fe3-kube-api-access-w788s\") pod \"6faae125-ceaa-469a-865a-339ba3fb0fe3\" (UID: \"6faae125-ceaa-469a-865a-339ba3fb0fe3\") " Apr 30 00:39:27.647951 kubelet[3367]: I0430 00:39:27.646998 3367 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6faae125-ceaa-469a-865a-339ba3fb0fe3-kube-api-access-w788s" (OuterVolumeSpecName: "kube-api-access-w788s") pod "6faae125-ceaa-469a-865a-339ba3fb0fe3" (UID: "6faae125-ceaa-469a-865a-339ba3fb0fe3"). InnerVolumeSpecName "kube-api-access-w788s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 00:39:27.651855 kubelet[3367]: I0430 00:39:27.651820 3367 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6faae125-ceaa-469a-865a-339ba3fb0fe3-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "6faae125-ceaa-469a-865a-339ba3fb0fe3" (UID: "6faae125-ceaa-469a-865a-339ba3fb0fe3"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 00:39:27.739388 kubelet[3367]: I0430 00:39:27.739321 3367 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6faae125-ceaa-469a-865a-339ba3fb0fe3-tigera-ca-bundle\") on node \"ci-4081.3.3-a-2b660cb835\" DevicePath \"\"" Apr 30 00:39:27.739388 kubelet[3367]: I0430 00:39:27.739354 3367 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-w788s\" (UniqueName: \"kubernetes.io/projected/6faae125-ceaa-469a-865a-339ba3fb0fe3-kube-api-access-w788s\") on node \"ci-4081.3.3-a-2b660cb835\" DevicePath \"\"" Apr 30 00:39:27.775489 containerd[1816]: time="2025-04-30T00:39:27.774936139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gttvr,Uid:3619d14f-5a71-42f8-b6ef-2a38cd93ede8,Namespace:calico-system,Attempt:0,}" Apr 30 00:39:27.841680 containerd[1816]: time="2025-04-30T00:39:27.840583781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:39:27.841680 containerd[1816]: time="2025-04-30T00:39:27.840656141Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:39:27.841680 containerd[1816]: time="2025-04-30T00:39:27.840667541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:39:27.841680 containerd[1816]: time="2025-04-30T00:39:27.840825140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:39:27.905112 containerd[1816]: time="2025-04-30T00:39:27.905062027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gttvr,Uid:3619d14f-5a71-42f8-b6ef-2a38cd93ede8,Namespace:calico-system,Attempt:0,} returns sandbox id \"9924395893bbf74f426cf59b6f0e2dff7575a8449d279a8d12ce4a656bd6ec2b\"" Apr 30 00:39:27.912526 containerd[1816]: time="2025-04-30T00:39:27.910971809Z" level=info msg="CreateContainer within sandbox \"9924395893bbf74f426cf59b6f0e2dff7575a8449d279a8d12ce4a656bd6ec2b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 00:39:27.952355 kubelet[3367]: I0430 00:39:27.952227 3367 scope.go:117] "RemoveContainer" containerID="bb50941ac73956e77b2ba01d9c0f043d3ab38d6bfa8a5279828be1f7c90ad49f" Apr 30 00:39:27.962400 containerd[1816]: time="2025-04-30T00:39:27.962356894Z" level=info msg="RemoveContainer for \"bb50941ac73956e77b2ba01d9c0f043d3ab38d6bfa8a5279828be1f7c90ad49f\"" Apr 30 00:39:27.979183 containerd[1816]: time="2025-04-30T00:39:27.978903244Z" level=info msg="StopContainer for \"baf73907776bfb92a47445e1876181334442ae2b15f03e8ae857da7c8498cd7f\" with timeout 30 (s)" Apr 30 00:39:27.980701 containerd[1816]: time="2025-04-30T00:39:27.980670598Z" level=info msg="Stop container \"baf73907776bfb92a47445e1876181334442ae2b15f03e8ae857da7c8498cd7f\" with signal terminated" Apr 30 00:39:28.036329 systemd[1]: var-lib-kubelet-pods-6faae125\x2dceaa\x2d469a\x2d865a\x2d339ba3fb0fe3-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. 
Apr 30 00:39:28.037702 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d-rootfs.mount: Deactivated successfully. Apr 30 00:39:28.037795 systemd[1]: run-netns-cni\x2d60684345\x2d10ee\x2d08e4\x2d6e80\x2d882473b753e7.mount: Deactivated successfully. Apr 30 00:39:28.037895 systemd[1]: var-lib-kubelet-pods-facb84c5\x2d8830\x2d48c5\x2d8675\x2d03a9219d0eb7-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Apr 30 00:39:28.037982 systemd[1]: var-lib-kubelet-pods-6faae125\x2dceaa\x2d469a\x2d865a\x2d339ba3fb0fe3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw788s.mount: Deactivated successfully. Apr 30 00:39:28.038063 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fad51d16b15d6448e7bbc0492bbd9a84fd0be32d7572135f482648584d03bb3b-rootfs.mount: Deactivated successfully. Apr 30 00:39:28.038138 systemd[1]: var-lib-kubelet-pods-facb84c5\x2d8830\x2d48c5\x2d8675\x2d03a9219d0eb7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnsg5f.mount: Deactivated successfully. Apr 30 00:39:28.038214 systemd[1]: var-lib-kubelet-pods-facb84c5\x2d8830\x2d48c5\x2d8675\x2d03a9219d0eb7-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. 
Apr 30 00:39:28.066432 containerd[1816]: time="2025-04-30T00:39:28.065508903Z" level=info msg="CreateContainer within sandbox \"9924395893bbf74f426cf59b6f0e2dff7575a8449d279a8d12ce4a656bd6ec2b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"556d743a1fce9b661cba79896c0e05690e71f28bdb9f2c052d485eb3c5460c9e\"" Apr 30 00:39:28.084418 containerd[1816]: time="2025-04-30T00:39:28.082653691Z" level=info msg="RemoveContainer for \"bb50941ac73956e77b2ba01d9c0f043d3ab38d6bfa8a5279828be1f7c90ad49f\" returns successfully" Apr 30 00:39:28.084629 containerd[1816]: time="2025-04-30T00:39:28.084605925Z" level=info msg="StartContainer for \"556d743a1fce9b661cba79896c0e05690e71f28bdb9f2c052d485eb3c5460c9e\"" Apr 30 00:39:28.088153 kubelet[3367]: I0430 00:39:28.088119 3367 scope.go:117] "RemoveContainer" containerID="bb50941ac73956e77b2ba01d9c0f043d3ab38d6bfa8a5279828be1f7c90ad49f" Apr 30 00:39:28.091257 containerd[1816]: time="2025-04-30T00:39:28.090271868Z" level=error msg="ContainerStatus for \"bb50941ac73956e77b2ba01d9c0f043d3ab38d6bfa8a5279828be1f7c90ad49f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bb50941ac73956e77b2ba01d9c0f043d3ab38d6bfa8a5279828be1f7c90ad49f\": not found" Apr 30 00:39:28.095515 kubelet[3367]: E0430 00:39:28.095472 3367 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bb50941ac73956e77b2ba01d9c0f043d3ab38d6bfa8a5279828be1f7c90ad49f\": not found" containerID="bb50941ac73956e77b2ba01d9c0f043d3ab38d6bfa8a5279828be1f7c90ad49f" Apr 30 00:39:28.095978 kubelet[3367]: I0430 00:39:28.095534 3367 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bb50941ac73956e77b2ba01d9c0f043d3ab38d6bfa8a5279828be1f7c90ad49f"} err="failed to get container status \"bb50941ac73956e77b2ba01d9c0f043d3ab38d6bfa8a5279828be1f7c90ad49f\": rpc error: code 
= NotFound desc = an error occurred when try to find container \"bb50941ac73956e77b2ba01d9c0f043d3ab38d6bfa8a5279828be1f7c90ad49f\": not found" Apr 30 00:39:28.095978 kubelet[3367]: I0430 00:39:28.095560 3367 scope.go:117] "RemoveContainer" containerID="f1e4170f527b1e2fa33e07f6be8cc83571c1f6ff3afdd771a4399d29cac77db1" Apr 30 00:39:28.114846 containerd[1816]: time="2025-04-30T00:39:28.114396555Z" level=info msg="RemoveContainer for \"f1e4170f527b1e2fa33e07f6be8cc83571c1f6ff3afdd771a4399d29cac77db1\"" Apr 30 00:39:28.128541 containerd[1816]: time="2025-04-30T00:39:28.128433713Z" level=info msg="RemoveContainer for \"f1e4170f527b1e2fa33e07f6be8cc83571c1f6ff3afdd771a4399d29cac77db1\" returns successfully" Apr 30 00:39:28.129712 kubelet[3367]: I0430 00:39:28.128924 3367 scope.go:117] "RemoveContainer" containerID="a8d038527f718abdb38fc645a3c6be048fe2f8338c7e8f284a4b1bb2e00a75ae" Apr 30 00:39:28.132405 containerd[1816]: time="2025-04-30T00:39:28.130747866Z" level=info msg="RemoveContainer for \"a8d038527f718abdb38fc645a3c6be048fe2f8338c7e8f284a4b1bb2e00a75ae\"" Apr 30 00:39:28.141675 containerd[1816]: time="2025-04-30T00:39:28.141632993Z" level=info msg="RemoveContainer for \"a8d038527f718abdb38fc645a3c6be048fe2f8338c7e8f284a4b1bb2e00a75ae\" returns successfully" Apr 30 00:39:28.142633 kubelet[3367]: I0430 00:39:28.142598 3367 scope.go:117] "RemoveContainer" containerID="9821e1b573aca504b6bf158ee40a664018f9dc6ca72cc11359b588534e7e99b1" Apr 30 00:39:28.159113 containerd[1816]: time="2025-04-30T00:39:28.159010701Z" level=info msg="RemoveContainer for \"9821e1b573aca504b6bf158ee40a664018f9dc6ca72cc11359b588534e7e99b1\"" Apr 30 00:39:28.176641 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-baf73907776bfb92a47445e1876181334442ae2b15f03e8ae857da7c8498cd7f-rootfs.mount: Deactivated successfully. 
Apr 30 00:39:28.186716 containerd[1816]: time="2025-04-30T00:39:28.186636897Z" level=info msg="RemoveContainer for \"9821e1b573aca504b6bf158ee40a664018f9dc6ca72cc11359b588534e7e99b1\" returns successfully" Apr 30 00:39:28.188473 kubelet[3367]: I0430 00:39:28.187808 3367 scope.go:117] "RemoveContainer" containerID="f1e4170f527b1e2fa33e07f6be8cc83571c1f6ff3afdd771a4399d29cac77db1" Apr 30 00:39:28.189136 containerd[1816]: time="2025-04-30T00:39:28.188232252Z" level=error msg="ContainerStatus for \"f1e4170f527b1e2fa33e07f6be8cc83571c1f6ff3afdd771a4399d29cac77db1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f1e4170f527b1e2fa33e07f6be8cc83571c1f6ff3afdd771a4399d29cac77db1\": not found" Apr 30 00:39:28.189453 kubelet[3367]: E0430 00:39:28.188831 3367 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f1e4170f527b1e2fa33e07f6be8cc83571c1f6ff3afdd771a4399d29cac77db1\": not found" containerID="f1e4170f527b1e2fa33e07f6be8cc83571c1f6ff3afdd771a4399d29cac77db1" Apr 30 00:39:28.189453 kubelet[3367]: I0430 00:39:28.189025 3367 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f1e4170f527b1e2fa33e07f6be8cc83571c1f6ff3afdd771a4399d29cac77db1"} err="failed to get container status \"f1e4170f527b1e2fa33e07f6be8cc83571c1f6ff3afdd771a4399d29cac77db1\": rpc error: code = NotFound desc = an error occurred when try to find container \"f1e4170f527b1e2fa33e07f6be8cc83571c1f6ff3afdd771a4399d29cac77db1\": not found" Apr 30 00:39:28.189453 kubelet[3367]: I0430 00:39:28.189051 3367 scope.go:117] "RemoveContainer" containerID="a8d038527f718abdb38fc645a3c6be048fe2f8338c7e8f284a4b1bb2e00a75ae" Apr 30 00:39:28.190983 containerd[1816]: time="2025-04-30T00:39:28.190650005Z" level=error msg="ContainerStatus for \"a8d038527f718abdb38fc645a3c6be048fe2f8338c7e8f284a4b1bb2e00a75ae\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"a8d038527f718abdb38fc645a3c6be048fe2f8338c7e8f284a4b1bb2e00a75ae\": not found" Apr 30 00:39:28.191077 kubelet[3367]: E0430 00:39:28.190874 3367 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a8d038527f718abdb38fc645a3c6be048fe2f8338c7e8f284a4b1bb2e00a75ae\": not found" containerID="a8d038527f718abdb38fc645a3c6be048fe2f8338c7e8f284a4b1bb2e00a75ae" Apr 30 00:39:28.191077 kubelet[3367]: I0430 00:39:28.190905 3367 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a8d038527f718abdb38fc645a3c6be048fe2f8338c7e8f284a4b1bb2e00a75ae"} err="failed to get container status \"a8d038527f718abdb38fc645a3c6be048fe2f8338c7e8f284a4b1bb2e00a75ae\": rpc error: code = NotFound desc = an error occurred when try to find container \"a8d038527f718abdb38fc645a3c6be048fe2f8338c7e8f284a4b1bb2e00a75ae\": not found" Apr 30 00:39:28.191077 kubelet[3367]: I0430 00:39:28.190923 3367 scope.go:117] "RemoveContainer" containerID="9821e1b573aca504b6bf158ee40a664018f9dc6ca72cc11359b588534e7e99b1" Apr 30 00:39:28.196705 containerd[1816]: time="2025-04-30T00:39:28.195593950Z" level=error msg="ContainerStatus for \"9821e1b573aca504b6bf158ee40a664018f9dc6ca72cc11359b588534e7e99b1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9821e1b573aca504b6bf158ee40a664018f9dc6ca72cc11359b588534e7e99b1\": not found" Apr 30 00:39:28.196812 kubelet[3367]: E0430 00:39:28.195741 3367 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9821e1b573aca504b6bf158ee40a664018f9dc6ca72cc11359b588534e7e99b1\": not found" containerID="9821e1b573aca504b6bf158ee40a664018f9dc6ca72cc11359b588534e7e99b1" Apr 30 00:39:28.196812 kubelet[3367]: I0430 
00:39:28.195802 3367 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9821e1b573aca504b6bf158ee40a664018f9dc6ca72cc11359b588534e7e99b1"} err="failed to get container status \"9821e1b573aca504b6bf158ee40a664018f9dc6ca72cc11359b588534e7e99b1\": rpc error: code = NotFound desc = an error occurred when try to find container \"9821e1b573aca504b6bf158ee40a664018f9dc6ca72cc11359b588534e7e99b1\": not found" Apr 30 00:39:28.230245 containerd[1816]: time="2025-04-30T00:39:28.230191606Z" level=info msg="StartContainer for \"556d743a1fce9b661cba79896c0e05690e71f28bdb9f2c052d485eb3c5460c9e\" returns successfully" Apr 30 00:39:28.233079 containerd[1816]: time="2025-04-30T00:39:28.233019837Z" level=info msg="shim disconnected" id=baf73907776bfb92a47445e1876181334442ae2b15f03e8ae857da7c8498cd7f namespace=k8s.io Apr 30 00:39:28.233079 containerd[1816]: time="2025-04-30T00:39:28.233076157Z" level=warning msg="cleaning up after shim disconnected" id=baf73907776bfb92a47445e1876181334442ae2b15f03e8ae857da7c8498cd7f namespace=k8s.io Apr 30 00:39:28.233192 containerd[1816]: time="2025-04-30T00:39:28.233085357Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:39:28.279865 containerd[1816]: time="2025-04-30T00:39:28.279468857Z" level=info msg="StopContainer for \"baf73907776bfb92a47445e1876181334442ae2b15f03e8ae857da7c8498cd7f\" returns successfully" Apr 30 00:39:28.281883 containerd[1816]: time="2025-04-30T00:39:28.281221172Z" level=info msg="StopPodSandbox for \"0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643\"" Apr 30 00:39:28.281883 containerd[1816]: time="2025-04-30T00:39:28.281259972Z" level=info msg="Container to stop \"baf73907776bfb92a47445e1876181334442ae2b15f03e8ae857da7c8498cd7f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:39:28.325557 containerd[1816]: time="2025-04-30T00:39:28.325205119Z" level=info msg="shim disconnected" 
id=556d743a1fce9b661cba79896c0e05690e71f28bdb9f2c052d485eb3c5460c9e namespace=k8s.io Apr 30 00:39:28.325557 containerd[1816]: time="2025-04-30T00:39:28.325261279Z" level=warning msg="cleaning up after shim disconnected" id=556d743a1fce9b661cba79896c0e05690e71f28bdb9f2c052d485eb3c5460c9e namespace=k8s.io Apr 30 00:39:28.325557 containerd[1816]: time="2025-04-30T00:39:28.325272359Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:39:28.352443 containerd[1816]: time="2025-04-30T00:39:28.352269078Z" level=info msg="shim disconnected" id=1f5a83f9ed857bfd4b0ee890e83f9bffb4b05806ec69f44babc20c82707b1644 namespace=k8s.io Apr 30 00:39:28.352443 containerd[1816]: time="2025-04-30T00:39:28.352342037Z" level=warning msg="cleaning up after shim disconnected" id=1f5a83f9ed857bfd4b0ee890e83f9bffb4b05806ec69f44babc20c82707b1644 namespace=k8s.io Apr 30 00:39:28.352443 containerd[1816]: time="2025-04-30T00:39:28.352407197Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:39:28.381967 containerd[1816]: time="2025-04-30T00:39:28.381625909Z" level=info msg="shim disconnected" id=0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643 namespace=k8s.io Apr 30 00:39:28.381967 containerd[1816]: time="2025-04-30T00:39:28.381676869Z" level=warning msg="cleaning up after shim disconnected" id=0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643 namespace=k8s.io Apr 30 00:39:28.381967 containerd[1816]: time="2025-04-30T00:39:28.381684989Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:39:28.404832 containerd[1816]: time="2025-04-30T00:39:28.404550800Z" level=info msg="StopContainer for \"1f5a83f9ed857bfd4b0ee890e83f9bffb4b05806ec69f44babc20c82707b1644\" returns successfully" Apr 30 00:39:28.405946 containerd[1816]: time="2025-04-30T00:39:28.405683597Z" level=info msg="StopPodSandbox for \"ffd8812d5db16ab434d511509365c6cde8b62e720381f4cef187b68490816470\"" Apr 30 00:39:28.405946 containerd[1816]: 
time="2025-04-30T00:39:28.405741876Z" level=info msg="Container to stop \"1f5a83f9ed857bfd4b0ee890e83f9bffb4b05806ec69f44babc20c82707b1644\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:39:28.458983 containerd[1816]: time="2025-04-30T00:39:28.458572477Z" level=info msg="shim disconnected" id=ffd8812d5db16ab434d511509365c6cde8b62e720381f4cef187b68490816470 namespace=k8s.io Apr 30 00:39:28.458983 containerd[1816]: time="2025-04-30T00:39:28.458782116Z" level=warning msg="cleaning up after shim disconnected" id=ffd8812d5db16ab434d511509365c6cde8b62e720381f4cef187b68490816470 namespace=k8s.io Apr 30 00:39:28.459322 containerd[1816]: time="2025-04-30T00:39:28.458792276Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:39:28.493069 containerd[1816]: time="2025-04-30T00:39:28.492939053Z" level=info msg="TearDown network for sandbox \"ffd8812d5db16ab434d511509365c6cde8b62e720381f4cef187b68490816470\" successfully" Apr 30 00:39:28.493069 containerd[1816]: time="2025-04-30T00:39:28.492973013Z" level=info msg="StopPodSandbox for \"ffd8812d5db16ab434d511509365c6cde8b62e720381f4cef187b68490816470\" returns successfully" Apr 30 00:39:28.495860 systemd-networkd[1368]: cali4e73823ca6a: Link DOWN Apr 30 00:39:28.495870 systemd-networkd[1368]: cali4e73823ca6a: Lost carrier Apr 30 00:39:28.558812 kubelet[3367]: I0430 00:39:28.558777 3367 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7fc6476-2f37-469e-993e-94b38b9dd4ef-tigera-ca-bundle\") pod \"e7fc6476-2f37-469e-993e-94b38b9dd4ef\" (UID: \"e7fc6476-2f37-469e-993e-94b38b9dd4ef\") " Apr 30 00:39:28.558812 kubelet[3367]: I0430 00:39:28.558819 3367 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e7fc6476-2f37-469e-993e-94b38b9dd4ef-typha-certs\") pod \"e7fc6476-2f37-469e-993e-94b38b9dd4ef\" (UID: 
\"e7fc6476-2f37-469e-993e-94b38b9dd4ef\") " Apr 30 00:39:28.558989 kubelet[3367]: I0430 00:39:28.558840 3367 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xljj2\" (UniqueName: \"kubernetes.io/projected/e7fc6476-2f37-469e-993e-94b38b9dd4ef-kube-api-access-xljj2\") pod \"e7fc6476-2f37-469e-993e-94b38b9dd4ef\" (UID: \"e7fc6476-2f37-469e-993e-94b38b9dd4ef\") " Apr 30 00:39:28.564112 kubelet[3367]: I0430 00:39:28.563531 3367 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7fc6476-2f37-469e-993e-94b38b9dd4ef-kube-api-access-xljj2" (OuterVolumeSpecName: "kube-api-access-xljj2") pod "e7fc6476-2f37-469e-993e-94b38b9dd4ef" (UID: "e7fc6476-2f37-469e-993e-94b38b9dd4ef"). InnerVolumeSpecName "kube-api-access-xljj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 00:39:28.565991 kubelet[3367]: I0430 00:39:28.565947 3367 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7fc6476-2f37-469e-993e-94b38b9dd4ef-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "e7fc6476-2f37-469e-993e-94b38b9dd4ef" (UID: "e7fc6476-2f37-469e-993e-94b38b9dd4ef"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 00:39:28.569265 kubelet[3367]: I0430 00:39:28.569146 3367 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7fc6476-2f37-469e-993e-94b38b9dd4ef-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "e7fc6476-2f37-469e-993e-94b38b9dd4ef" (UID: "e7fc6476-2f37-469e-993e-94b38b9dd4ef"). InnerVolumeSpecName "typha-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 30 00:39:28.581356 kubelet[3367]: I0430 00:39:28.581170 3367 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6faae125-ceaa-469a-865a-339ba3fb0fe3" path="/var/lib/kubelet/pods/6faae125-ceaa-469a-865a-339ba3fb0fe3/volumes" Apr 30 00:39:28.581936 kubelet[3367]: I0430 00:39:28.581919 3367 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="facb84c5-8830-48c5-8675-03a9219d0eb7" path="/var/lib/kubelet/pods/facb84c5-8830-48c5-8675-03a9219d0eb7/volumes" Apr 30 00:39:28.586134 containerd[1816]: 2025-04-30 00:39:28.493 [INFO][6767] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" Apr 30 00:39:28.586134 containerd[1816]: 2025-04-30 00:39:28.493 [INFO][6767] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" iface="eth0" netns="/var/run/netns/cni-9708c8cb-5bc4-d066-0374-eb40585305ee" Apr 30 00:39:28.586134 containerd[1816]: 2025-04-30 00:39:28.494 [INFO][6767] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" iface="eth0" netns="/var/run/netns/cni-9708c8cb-5bc4-d066-0374-eb40585305ee" Apr 30 00:39:28.586134 containerd[1816]: 2025-04-30 00:39:28.504 [INFO][6767] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" after=10.677648ms iface="eth0" netns="/var/run/netns/cni-9708c8cb-5bc4-d066-0374-eb40585305ee" Apr 30 00:39:28.586134 containerd[1816]: 2025-04-30 00:39:28.504 [INFO][6767] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" Apr 30 00:39:28.586134 containerd[1816]: 2025-04-30 00:39:28.504 [INFO][6767] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" Apr 30 00:39:28.586134 containerd[1816]: 2025-04-30 00:39:28.531 [INFO][6811] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" HandleID="k8s-pod-network.0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0" Apr 30 00:39:28.586134 containerd[1816]: 2025-04-30 00:39:28.532 [INFO][6811] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:39:28.586134 containerd[1816]: 2025-04-30 00:39:28.532 [INFO][6811] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:39:28.586134 containerd[1816]: 2025-04-30 00:39:28.575 [INFO][6811] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" HandleID="k8s-pod-network.0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0" Apr 30 00:39:28.586134 containerd[1816]: 2025-04-30 00:39:28.575 [INFO][6811] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" HandleID="k8s-pod-network.0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0" Apr 30 00:39:28.586134 containerd[1816]: 2025-04-30 00:39:28.576 [INFO][6811] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:39:28.586134 containerd[1816]: 2025-04-30 00:39:28.582 [INFO][6767] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" Apr 30 00:39:28.587598 containerd[1816]: time="2025-04-30T00:39:28.586581611Z" level=info msg="TearDown network for sandbox \"0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643\" successfully" Apr 30 00:39:28.587598 containerd[1816]: time="2025-04-30T00:39:28.586803330Z" level=info msg="StopPodSandbox for \"0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643\" returns successfully" Apr 30 00:39:28.589129 containerd[1816]: time="2025-04-30T00:39:28.588886044Z" level=info msg="StopPodSandbox for \"979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146\"" Apr 30 00:39:28.660223 kubelet[3367]: I0430 00:39:28.659551 3367 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7fc6476-2f37-469e-993e-94b38b9dd4ef-tigera-ca-bundle\") on node \"ci-4081.3.3-a-2b660cb835\" DevicePath \"\"" Apr 30 00:39:28.660223 kubelet[3367]: I0430 00:39:28.659583 3367 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e7fc6476-2f37-469e-993e-94b38b9dd4ef-typha-certs\") on node \"ci-4081.3.3-a-2b660cb835\" DevicePath \"\"" Apr 30 00:39:28.660223 kubelet[3367]: I0430 00:39:28.659592 3367 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-xljj2\" (UniqueName: \"kubernetes.io/projected/e7fc6476-2f37-469e-993e-94b38b9dd4ef-kube-api-access-xljj2\") on node \"ci-4081.3.3-a-2b660cb835\" DevicePath \"\"" Apr 30 00:39:28.672515 containerd[1816]: 2025-04-30 00:39:28.639 [WARNING][6837] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0", GenerateName:"calico-apiserver-7688785779-", Namespace:"calico-apiserver", SelfLink:"", UID:"9a6c46b4-3687-469f-a6e7-86919697db2e", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7688785779", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-2b660cb835", ContainerID:"0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643", Pod:"calico-apiserver-7688785779-95fgn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e73823ca6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:39:28.672515 containerd[1816]: 2025-04-30 00:39:28.639 [INFO][6837] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" Apr 30 00:39:28.672515 containerd[1816]: 2025-04-30 00:39:28.639 [INFO][6837] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" iface="eth0" netns="" Apr 30 00:39:28.672515 containerd[1816]: 2025-04-30 00:39:28.639 [INFO][6837] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" Apr 30 00:39:28.672515 containerd[1816]: 2025-04-30 00:39:28.639 [INFO][6837] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" Apr 30 00:39:28.672515 containerd[1816]: 2025-04-30 00:39:28.659 [INFO][6844] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" HandleID="k8s-pod-network.979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0" Apr 30 00:39:28.672515 containerd[1816]: 2025-04-30 00:39:28.659 [INFO][6844] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:39:28.672515 containerd[1816]: 2025-04-30 00:39:28.659 [INFO][6844] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:39:28.672515 containerd[1816]: 2025-04-30 00:39:28.668 [WARNING][6844] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" HandleID="k8s-pod-network.979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0" Apr 30 00:39:28.672515 containerd[1816]: 2025-04-30 00:39:28.668 [INFO][6844] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" HandleID="k8s-pod-network.979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0" Apr 30 00:39:28.672515 containerd[1816]: 2025-04-30 00:39:28.670 [INFO][6844] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:39:28.672515 containerd[1816]: 2025-04-30 00:39:28.671 [INFO][6837] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" Apr 30 00:39:28.672922 containerd[1816]: time="2025-04-30T00:39:28.672565712Z" level=info msg="TearDown network for sandbox \"979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146\" successfully" Apr 30 00:39:28.672922 containerd[1816]: time="2025-04-30T00:39:28.672591952Z" level=info msg="StopPodSandbox for \"979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146\" returns successfully" Apr 30 00:39:28.760786 kubelet[3367]: I0430 00:39:28.760250 3367 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9a6c46b4-3687-469f-a6e7-86919697db2e-calico-apiserver-certs\") pod \"9a6c46b4-3687-469f-a6e7-86919697db2e\" (UID: \"9a6c46b4-3687-469f-a6e7-86919697db2e\") " Apr 30 00:39:28.760786 kubelet[3367]: I0430 00:39:28.760300 3367 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9twfh\" (UniqueName: 
\"kubernetes.io/projected/9a6c46b4-3687-469f-a6e7-86919697db2e-kube-api-access-9twfh\") pod \"9a6c46b4-3687-469f-a6e7-86919697db2e\" (UID: \"9a6c46b4-3687-469f-a6e7-86919697db2e\") " Apr 30 00:39:28.762714 kubelet[3367]: I0430 00:39:28.762651 3367 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a6c46b4-3687-469f-a6e7-86919697db2e-kube-api-access-9twfh" (OuterVolumeSpecName: "kube-api-access-9twfh") pod "9a6c46b4-3687-469f-a6e7-86919697db2e" (UID: "9a6c46b4-3687-469f-a6e7-86919697db2e"). InnerVolumeSpecName "kube-api-access-9twfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 00:39:28.763073 kubelet[3367]: I0430 00:39:28.763037 3367 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6c46b4-3687-469f-a6e7-86919697db2e-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "9a6c46b4-3687-469f-a6e7-86919697db2e" (UID: "9a6c46b4-3687-469f-a6e7-86919697db2e"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 30 00:39:28.860633 kubelet[3367]: I0430 00:39:28.860488 3367 reconciler_common.go:289] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9a6c46b4-3687-469f-a6e7-86919697db2e-calico-apiserver-certs\") on node \"ci-4081.3.3-a-2b660cb835\" DevicePath \"\"" Apr 30 00:39:28.860633 kubelet[3367]: I0430 00:39:28.860515 3367 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-9twfh\" (UniqueName: \"kubernetes.io/projected/9a6c46b4-3687-469f-a6e7-86919697db2e-kube-api-access-9twfh\") on node \"ci-4081.3.3-a-2b660cb835\" DevicePath \"\"" Apr 30 00:39:28.977877 kubelet[3367]: I0430 00:39:28.977843 3367 scope.go:117] "RemoveContainer" containerID="1f5a83f9ed857bfd4b0ee890e83f9bffb4b05806ec69f44babc20c82707b1644" Apr 30 00:39:28.980068 containerd[1816]: time="2025-04-30T00:39:28.979844265Z" level=info msg="RemoveContainer for \"1f5a83f9ed857bfd4b0ee890e83f9bffb4b05806ec69f44babc20c82707b1644\"" Apr 30 00:39:28.987730 containerd[1816]: time="2025-04-30T00:39:28.987477962Z" level=info msg="CreateContainer within sandbox \"9924395893bbf74f426cf59b6f0e2dff7575a8449d279a8d12ce4a656bd6ec2b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 00:39:28.997803 containerd[1816]: time="2025-04-30T00:39:28.997746811Z" level=info msg="RemoveContainer for \"1f5a83f9ed857bfd4b0ee890e83f9bffb4b05806ec69f44babc20c82707b1644\" returns successfully" Apr 30 00:39:28.998091 kubelet[3367]: I0430 00:39:28.998052 3367 scope.go:117] "RemoveContainer" containerID="1f5a83f9ed857bfd4b0ee890e83f9bffb4b05806ec69f44babc20c82707b1644" Apr 30 00:39:28.998748 containerd[1816]: time="2025-04-30T00:39:28.998483129Z" level=error msg="ContainerStatus for \"1f5a83f9ed857bfd4b0ee890e83f9bffb4b05806ec69f44babc20c82707b1644\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"1f5a83f9ed857bfd4b0ee890e83f9bffb4b05806ec69f44babc20c82707b1644\": not found" Apr 30 00:39:28.999537 kubelet[3367]: E0430 00:39:28.999405 3367 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1f5a83f9ed857bfd4b0ee890e83f9bffb4b05806ec69f44babc20c82707b1644\": not found" containerID="1f5a83f9ed857bfd4b0ee890e83f9bffb4b05806ec69f44babc20c82707b1644" Apr 30 00:39:28.999537 kubelet[3367]: I0430 00:39:28.999429 3367 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1f5a83f9ed857bfd4b0ee890e83f9bffb4b05806ec69f44babc20c82707b1644"} err="failed to get container status \"1f5a83f9ed857bfd4b0ee890e83f9bffb4b05806ec69f44babc20c82707b1644\": rpc error: code = NotFound desc = an error occurred when try to find container \"1f5a83f9ed857bfd4b0ee890e83f9bffb4b05806ec69f44babc20c82707b1644\": not found" Apr 30 00:39:28.999537 kubelet[3367]: I0430 00:39:28.999445 3367 scope.go:117] "RemoveContainer" containerID="baf73907776bfb92a47445e1876181334442ae2b15f03e8ae857da7c8498cd7f" Apr 30 00:39:29.002421 containerd[1816]: time="2025-04-30T00:39:29.002391677Z" level=info msg="RemoveContainer for \"baf73907776bfb92a47445e1876181334442ae2b15f03e8ae857da7c8498cd7f\"" Apr 30 00:39:29.026286 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-556d743a1fce9b661cba79896c0e05690e71f28bdb9f2c052d485eb3c5460c9e-rootfs.mount: Deactivated successfully. Apr 30 00:39:29.029282 containerd[1816]: time="2025-04-30T00:39:29.027761680Z" level=info msg="RemoveContainer for \"baf73907776bfb92a47445e1876181334442ae2b15f03e8ae857da7c8498cd7f\" returns successfully" Apr 30 00:39:29.027730 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643-rootfs.mount: Deactivated successfully. 
Apr 30 00:39:29.027844 systemd[1]: run-netns-cni\x2d9708c8cb\x2d5bc4\x2dd066\x2d0374\x2deb40585305ee.mount: Deactivated successfully. Apr 30 00:39:29.027937 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643-shm.mount: Deactivated successfully. Apr 30 00:39:29.028060 systemd[1]: var-lib-kubelet-pods-9a6c46b4\x2d3687\x2d469f\x2da6e7\x2d86919697db2e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9twfh.mount: Deactivated successfully. Apr 30 00:39:29.028173 systemd[1]: var-lib-kubelet-pods-9a6c46b4\x2d3687\x2d469f\x2da6e7\x2d86919697db2e-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Apr 30 00:39:29.028286 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f5a83f9ed857bfd4b0ee890e83f9bffb4b05806ec69f44babc20c82707b1644-rootfs.mount: Deactivated successfully. Apr 30 00:39:29.028397 systemd[1]: var-lib-kubelet-pods-e7fc6476\x2d2f37\x2d469e\x2d993e\x2d94b38b9dd4ef-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Apr 30 00:39:29.028502 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ffd8812d5db16ab434d511509365c6cde8b62e720381f4cef187b68490816470-rootfs.mount: Deactivated successfully. Apr 30 00:39:29.028596 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ffd8812d5db16ab434d511509365c6cde8b62e720381f4cef187b68490816470-shm.mount: Deactivated successfully. Apr 30 00:39:29.028704 systemd[1]: var-lib-kubelet-pods-e7fc6476\x2d2f37\x2d469e\x2d993e\x2d94b38b9dd4ef-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxljj2.mount: Deactivated successfully. Apr 30 00:39:29.028806 systemd[1]: var-lib-kubelet-pods-e7fc6476\x2d2f37\x2d469e\x2d993e\x2d94b38b9dd4ef-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. 
Apr 30 00:39:29.035539 kubelet[3367]: I0430 00:39:29.033502 3367 scope.go:117] "RemoveContainer" containerID="baf73907776bfb92a47445e1876181334442ae2b15f03e8ae857da7c8498cd7f" Apr 30 00:39:29.035539 kubelet[3367]: E0430 00:39:29.035119 3367 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"baf73907776bfb92a47445e1876181334442ae2b15f03e8ae857da7c8498cd7f\": not found" containerID="baf73907776bfb92a47445e1876181334442ae2b15f03e8ae857da7c8498cd7f" Apr 30 00:39:29.035539 kubelet[3367]: I0430 00:39:29.035185 3367 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"baf73907776bfb92a47445e1876181334442ae2b15f03e8ae857da7c8498cd7f"} err="failed to get container status \"baf73907776bfb92a47445e1876181334442ae2b15f03e8ae857da7c8498cd7f\": rpc error: code = NotFound desc = an error occurred when try to find container \"baf73907776bfb92a47445e1876181334442ae2b15f03e8ae857da7c8498cd7f\": not found" Apr 30 00:39:29.036548 containerd[1816]: time="2025-04-30T00:39:29.034987538Z" level=error msg="ContainerStatus for \"baf73907776bfb92a47445e1876181334442ae2b15f03e8ae857da7c8498cd7f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"baf73907776bfb92a47445e1876181334442ae2b15f03e8ae857da7c8498cd7f\": not found" Apr 30 00:39:29.046349 containerd[1816]: time="2025-04-30T00:39:29.046309144Z" level=info msg="CreateContainer within sandbox \"9924395893bbf74f426cf59b6f0e2dff7575a8449d279a8d12ce4a656bd6ec2b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e340b419eb58fb78f526a825ab7fbfefe49e4eca0639565da52f0d140e8c80ed\"" Apr 30 00:39:29.047261 containerd[1816]: time="2025-04-30T00:39:29.047230622Z" level=info msg="StartContainer for \"e340b419eb58fb78f526a825ab7fbfefe49e4eca0639565da52f0d140e8c80ed\"" Apr 30 00:39:29.114389 containerd[1816]: 
time="2025-04-30T00:39:29.114228939Z" level=info msg="StartContainer for \"e340b419eb58fb78f526a825ab7fbfefe49e4eca0639565da52f0d140e8c80ed\" returns successfully" Apr 30 00:39:29.667053 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e340b419eb58fb78f526a825ab7fbfefe49e4eca0639565da52f0d140e8c80ed-rootfs.mount: Deactivated successfully. Apr 30 00:39:29.709389 containerd[1816]: time="2025-04-30T00:39:29.708884866Z" level=info msg="shim disconnected" id=e340b419eb58fb78f526a825ab7fbfefe49e4eca0639565da52f0d140e8c80ed namespace=k8s.io Apr 30 00:39:29.709389 containerd[1816]: time="2025-04-30T00:39:29.708944346Z" level=warning msg="cleaning up after shim disconnected" id=e340b419eb58fb78f526a825ab7fbfefe49e4eca0639565da52f0d140e8c80ed namespace=k8s.io Apr 30 00:39:29.709389 containerd[1816]: time="2025-04-30T00:39:29.708953426Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:39:29.906597 kubelet[3367]: I0430 00:39:29.904482 3367 topology_manager.go:215] "Topology Admit Handler" podUID="2a137383-c82a-442e-87a7-302844422a09" podNamespace="calico-system" podName="calico-typha-55dc6dd8ff-5nxtf" Apr 30 00:39:29.906597 kubelet[3367]: E0430 00:39:29.904548 3367 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6faae125-ceaa-469a-865a-339ba3fb0fe3" containerName="calico-kube-controllers" Apr 30 00:39:29.906597 kubelet[3367]: E0430 00:39:29.904560 3367 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9a6c46b4-3687-469f-a6e7-86919697db2e" containerName="calico-apiserver" Apr 30 00:39:29.906597 kubelet[3367]: E0430 00:39:29.904567 3367 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e7fc6476-2f37-469e-993e-94b38b9dd4ef" containerName="calico-typha" Apr 30 00:39:29.906597 kubelet[3367]: I0430 00:39:29.904594 3367 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a6c46b4-3687-469f-a6e7-86919697db2e" containerName="calico-apiserver" Apr 30 00:39:29.906597 kubelet[3367]: I0430 
00:39:29.904601 3367 memory_manager.go:354] "RemoveStaleState removing state" podUID="6faae125-ceaa-469a-865a-339ba3fb0fe3" containerName="calico-kube-controllers" Apr 30 00:39:29.906597 kubelet[3367]: I0430 00:39:29.904607 3367 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7fc6476-2f37-469e-993e-94b38b9dd4ef" containerName="calico-typha" Apr 30 00:39:29.967745 kubelet[3367]: I0430 00:39:29.967707 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtr4r\" (UniqueName: \"kubernetes.io/projected/2a137383-c82a-442e-87a7-302844422a09-kube-api-access-gtr4r\") pod \"calico-typha-55dc6dd8ff-5nxtf\" (UID: \"2a137383-c82a-442e-87a7-302844422a09\") " pod="calico-system/calico-typha-55dc6dd8ff-5nxtf" Apr 30 00:39:29.968043 kubelet[3367]: I0430 00:39:29.967853 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a137383-c82a-442e-87a7-302844422a09-tigera-ca-bundle\") pod \"calico-typha-55dc6dd8ff-5nxtf\" (UID: \"2a137383-c82a-442e-87a7-302844422a09\") " pod="calico-system/calico-typha-55dc6dd8ff-5nxtf" Apr 30 00:39:29.968043 kubelet[3367]: I0430 00:39:29.967890 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2a137383-c82a-442e-87a7-302844422a09-typha-certs\") pod \"calico-typha-55dc6dd8ff-5nxtf\" (UID: \"2a137383-c82a-442e-87a7-302844422a09\") " pod="calico-system/calico-typha-55dc6dd8ff-5nxtf" Apr 30 00:39:30.018160 containerd[1816]: time="2025-04-30T00:39:30.018094453Z" level=info msg="CreateContainer within sandbox \"9924395893bbf74f426cf59b6f0e2dff7575a8449d279a8d12ce4a656bd6ec2b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 00:39:30.080781 containerd[1816]: time="2025-04-30T00:39:30.080726944Z" level=info msg="CreateContainer within sandbox 
\"9924395893bbf74f426cf59b6f0e2dff7575a8449d279a8d12ce4a656bd6ec2b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5bb4b4b707baf3be68f9e9ae101f3acb8c6e767d0337ddb69211d6d37e73c4d4\"" Apr 30 00:39:30.083074 containerd[1816]: time="2025-04-30T00:39:30.082116580Z" level=info msg="StartContainer for \"5bb4b4b707baf3be68f9e9ae101f3acb8c6e767d0337ddb69211d6d37e73c4d4\"" Apr 30 00:39:30.148703 containerd[1816]: time="2025-04-30T00:39:30.148657659Z" level=info msg="StartContainer for \"5bb4b4b707baf3be68f9e9ae101f3acb8c6e767d0337ddb69211d6d37e73c4d4\" returns successfully" Apr 30 00:39:30.209202 containerd[1816]: time="2025-04-30T00:39:30.209159877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55dc6dd8ff-5nxtf,Uid:2a137383-c82a-442e-87a7-302844422a09,Namespace:calico-system,Attempt:0,}" Apr 30 00:39:30.291153 kubelet[3367]: I0430 00:39:30.290549 3367 topology_manager.go:215] "Topology Admit Handler" podUID="e98e3503-d00b-4de3-bc55-a8938c47500d" podNamespace="calico-system" podName="calico-kube-controllers-75d56447db-5tqsh" Apr 30 00:39:30.307908 containerd[1816]: time="2025-04-30T00:39:30.307754140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:39:30.309520 containerd[1816]: time="2025-04-30T00:39:30.308320938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:39:30.309520 containerd[1816]: time="2025-04-30T00:39:30.308351378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:39:30.309520 containerd[1816]: time="2025-04-30T00:39:30.309424695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:39:30.371356 kubelet[3367]: I0430 00:39:30.371315 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e98e3503-d00b-4de3-bc55-a8938c47500d-tigera-ca-bundle\") pod \"calico-kube-controllers-75d56447db-5tqsh\" (UID: \"e98e3503-d00b-4de3-bc55-a8938c47500d\") " pod="calico-system/calico-kube-controllers-75d56447db-5tqsh" Apr 30 00:39:30.371356 kubelet[3367]: I0430 00:39:30.371380 3367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p54tc\" (UniqueName: \"kubernetes.io/projected/e98e3503-d00b-4de3-bc55-a8938c47500d-kube-api-access-p54tc\") pod \"calico-kube-controllers-75d56447db-5tqsh\" (UID: \"e98e3503-d00b-4de3-bc55-a8938c47500d\") " pod="calico-system/calico-kube-controllers-75d56447db-5tqsh" Apr 30 00:39:30.387769 containerd[1816]: time="2025-04-30T00:39:30.387660299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55dc6dd8ff-5nxtf,Uid:2a137383-c82a-442e-87a7-302844422a09,Namespace:calico-system,Attempt:0,} returns sandbox id \"eb616a1734b1974becdf179a8517cc79e8c4e194505a7b0e694e472422b3f3b1\"" Apr 30 00:39:30.401043 containerd[1816]: time="2025-04-30T00:39:30.400988898Z" level=info msg="CreateContainer within sandbox \"eb616a1734b1974becdf179a8517cc79e8c4e194505a7b0e694e472422b3f3b1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 30 00:39:30.445444 containerd[1816]: time="2025-04-30T00:39:30.445387684Z" level=info msg="CreateContainer within sandbox \"eb616a1734b1974becdf179a8517cc79e8c4e194505a7b0e694e472422b3f3b1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6589743d8d85d3a244969a7dcf7f57e7e644f4620f503fc1dc17a29f17ed64a9\"" Apr 30 00:39:30.446001 containerd[1816]: time="2025-04-30T00:39:30.445952283Z" level=info msg="StartContainer for 
\"6589743d8d85d3a244969a7dcf7f57e7e644f4620f503fc1dc17a29f17ed64a9\"" Apr 30 00:39:30.505212 containerd[1816]: time="2025-04-30T00:39:30.505169144Z" level=info msg="StartContainer for \"6589743d8d85d3a244969a7dcf7f57e7e644f4620f503fc1dc17a29f17ed64a9\" returns successfully" Apr 30 00:39:30.585501 kubelet[3367]: I0430 00:39:30.584506 3367 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a6c46b4-3687-469f-a6e7-86919697db2e" path="/var/lib/kubelet/pods/9a6c46b4-3687-469f-a6e7-86919697db2e/volumes" Apr 30 00:39:30.585501 kubelet[3367]: I0430 00:39:30.584895 3367 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7fc6476-2f37-469e-993e-94b38b9dd4ef" path="/var/lib/kubelet/pods/e7fc6476-2f37-469e-993e-94b38b9dd4ef/volumes" Apr 30 00:39:30.602727 containerd[1816]: time="2025-04-30T00:39:30.602685770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75d56447db-5tqsh,Uid:e98e3503-d00b-4de3-bc55-a8938c47500d,Namespace:calico-system,Attempt:0,}" Apr 30 00:39:30.769738 systemd-networkd[1368]: cali701047a4528: Link UP Apr 30 00:39:30.770503 systemd-networkd[1368]: cali701047a4528: Gained carrier Apr 30 00:39:30.784847 containerd[1816]: 2025-04-30 00:39:30.697 [INFO][7064] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--75d56447db--5tqsh-eth0 calico-kube-controllers-75d56447db- calico-system e98e3503-d00b-4de3-bc55-a8938c47500d 1111 0 2025-04-30 00:39:28 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:75d56447db projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.3-a-2b660cb835 calico-kube-controllers-75d56447db-5tqsh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] 
cali701047a4528 [] []}} ContainerID="98fc488e3eb51fd1a009d9447a7f0ce2eb8d05079ad97723e71038afedca81f1" Namespace="calico-system" Pod="calico-kube-controllers-75d56447db-5tqsh" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--75d56447db--5tqsh-" Apr 30 00:39:30.784847 containerd[1816]: 2025-04-30 00:39:30.698 [INFO][7064] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="98fc488e3eb51fd1a009d9447a7f0ce2eb8d05079ad97723e71038afedca81f1" Namespace="calico-system" Pod="calico-kube-controllers-75d56447db-5tqsh" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--75d56447db--5tqsh-eth0" Apr 30 00:39:30.784847 containerd[1816]: 2025-04-30 00:39:30.726 [INFO][7075] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="98fc488e3eb51fd1a009d9447a7f0ce2eb8d05079ad97723e71038afedca81f1" HandleID="k8s-pod-network.98fc488e3eb51fd1a009d9447a7f0ce2eb8d05079ad97723e71038afedca81f1" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--75d56447db--5tqsh-eth0" Apr 30 00:39:30.784847 containerd[1816]: 2025-04-30 00:39:30.736 [INFO][7075] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="98fc488e3eb51fd1a009d9447a7f0ce2eb8d05079ad97723e71038afedca81f1" HandleID="k8s-pod-network.98fc488e3eb51fd1a009d9447a7f0ce2eb8d05079ad97723e71038afedca81f1" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--75d56447db--5tqsh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ec410), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-a-2b660cb835", "pod":"calico-kube-controllers-75d56447db-5tqsh", "timestamp":"2025-04-30 00:39:30.726151438 +0000 UTC"}, Hostname:"ci-4081.3.3-a-2b660cb835", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:39:30.784847 
containerd[1816]: 2025-04-30 00:39:30.736 [INFO][7075] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:39:30.784847 containerd[1816]: 2025-04-30 00:39:30.736 [INFO][7075] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:39:30.784847 containerd[1816]: 2025-04-30 00:39:30.736 [INFO][7075] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-2b660cb835' Apr 30 00:39:30.784847 containerd[1816]: 2025-04-30 00:39:30.738 [INFO][7075] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.98fc488e3eb51fd1a009d9447a7f0ce2eb8d05079ad97723e71038afedca81f1" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:30.784847 containerd[1816]: 2025-04-30 00:39:30.742 [INFO][7075] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:30.784847 containerd[1816]: 2025-04-30 00:39:30.745 [INFO][7075] ipam/ipam.go 489: Trying affinity for 192.168.1.0/26 host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:30.784847 containerd[1816]: 2025-04-30 00:39:30.747 [INFO][7075] ipam/ipam.go 155: Attempting to load block cidr=192.168.1.0/26 host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:30.784847 containerd[1816]: 2025-04-30 00:39:30.749 [INFO][7075] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.1.0/26 host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:30.784847 containerd[1816]: 2025-04-30 00:39:30.750 [INFO][7075] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.1.0/26 handle="k8s-pod-network.98fc488e3eb51fd1a009d9447a7f0ce2eb8d05079ad97723e71038afedca81f1" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:30.784847 containerd[1816]: 2025-04-30 00:39:30.751 [INFO][7075] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.98fc488e3eb51fd1a009d9447a7f0ce2eb8d05079ad97723e71038afedca81f1 Apr 30 00:39:30.784847 containerd[1816]: 2025-04-30 00:39:30.756 [INFO][7075] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.1.0/26 handle="k8s-pod-network.98fc488e3eb51fd1a009d9447a7f0ce2eb8d05079ad97723e71038afedca81f1" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:30.784847 containerd[1816]: 2025-04-30 00:39:30.765 [INFO][7075] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.1.9/26] block=192.168.1.0/26 handle="k8s-pod-network.98fc488e3eb51fd1a009d9447a7f0ce2eb8d05079ad97723e71038afedca81f1" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:30.784847 containerd[1816]: 2025-04-30 00:39:30.765 [INFO][7075] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.1.9/26] handle="k8s-pod-network.98fc488e3eb51fd1a009d9447a7f0ce2eb8d05079ad97723e71038afedca81f1" host="ci-4081.3.3-a-2b660cb835" Apr 30 00:39:30.784847 containerd[1816]: 2025-04-30 00:39:30.765 [INFO][7075] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:39:30.784847 containerd[1816]: 2025-04-30 00:39:30.765 [INFO][7075] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.1.9/26] IPv6=[] ContainerID="98fc488e3eb51fd1a009d9447a7f0ce2eb8d05079ad97723e71038afedca81f1" HandleID="k8s-pod-network.98fc488e3eb51fd1a009d9447a7f0ce2eb8d05079ad97723e71038afedca81f1" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--75d56447db--5tqsh-eth0" Apr 30 00:39:30.785875 containerd[1816]: 2025-04-30 00:39:30.767 [INFO][7064] cni-plugin/k8s.go 386: Populated endpoint ContainerID="98fc488e3eb51fd1a009d9447a7f0ce2eb8d05079ad97723e71038afedca81f1" Namespace="calico-system" Pod="calico-kube-controllers-75d56447db-5tqsh" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--75d56447db--5tqsh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--75d56447db--5tqsh-eth0", GenerateName:"calico-kube-controllers-75d56447db-", Namespace:"calico-system", SelfLink:"", UID:"e98e3503-d00b-4de3-bc55-a8938c47500d", 
ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 39, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75d56447db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-2b660cb835", ContainerID:"", Pod:"calico-kube-controllers-75d56447db-5tqsh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.1.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali701047a4528", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:39:30.785875 containerd[1816]: 2025-04-30 00:39:30.767 [INFO][7064] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.1.9/32] ContainerID="98fc488e3eb51fd1a009d9447a7f0ce2eb8d05079ad97723e71038afedca81f1" Namespace="calico-system" Pod="calico-kube-controllers-75d56447db-5tqsh" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--75d56447db--5tqsh-eth0" Apr 30 00:39:30.785875 containerd[1816]: 2025-04-30 00:39:30.767 [INFO][7064] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali701047a4528 ContainerID="98fc488e3eb51fd1a009d9447a7f0ce2eb8d05079ad97723e71038afedca81f1" Namespace="calico-system" Pod="calico-kube-controllers-75d56447db-5tqsh" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--75d56447db--5tqsh-eth0" Apr 30 00:39:30.785875 containerd[1816]: 
2025-04-30 00:39:30.769 [INFO][7064] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="98fc488e3eb51fd1a009d9447a7f0ce2eb8d05079ad97723e71038afedca81f1" Namespace="calico-system" Pod="calico-kube-controllers-75d56447db-5tqsh" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--75d56447db--5tqsh-eth0" Apr 30 00:39:30.785875 containerd[1816]: 2025-04-30 00:39:30.769 [INFO][7064] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="98fc488e3eb51fd1a009d9447a7f0ce2eb8d05079ad97723e71038afedca81f1" Namespace="calico-system" Pod="calico-kube-controllers-75d56447db-5tqsh" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--75d56447db--5tqsh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--75d56447db--5tqsh-eth0", GenerateName:"calico-kube-controllers-75d56447db-", Namespace:"calico-system", SelfLink:"", UID:"e98e3503-d00b-4de3-bc55-a8938c47500d", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 39, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75d56447db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-2b660cb835", ContainerID:"98fc488e3eb51fd1a009d9447a7f0ce2eb8d05079ad97723e71038afedca81f1", Pod:"calico-kube-controllers-75d56447db-5tqsh", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.1.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali701047a4528", MAC:"02:46:ea:ce:e2:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:39:30.785875 containerd[1816]: 2025-04-30 00:39:30.781 [INFO][7064] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="98fc488e3eb51fd1a009d9447a7f0ce2eb8d05079ad97723e71038afedca81f1" Namespace="calico-system" Pod="calico-kube-controllers-75d56447db-5tqsh" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--75d56447db--5tqsh-eth0" Apr 30 00:39:30.808518 containerd[1816]: time="2025-04-30T00:39:30.807985391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:39:30.808518 containerd[1816]: time="2025-04-30T00:39:30.808406070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:39:30.808518 containerd[1816]: time="2025-04-30T00:39:30.808418150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:39:30.808518 containerd[1816]: time="2025-04-30T00:39:30.808521749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:39:30.858431 containerd[1816]: time="2025-04-30T00:39:30.858304079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75d56447db-5tqsh,Uid:e98e3503-d00b-4de3-bc55-a8938c47500d,Namespace:calico-system,Attempt:0,} returns sandbox id \"98fc488e3eb51fd1a009d9447a7f0ce2eb8d05079ad97723e71038afedca81f1\"" Apr 30 00:39:30.866486 containerd[1816]: time="2025-04-30T00:39:30.866412015Z" level=info msg="CreateContainer within sandbox \"98fc488e3eb51fd1a009d9447a7f0ce2eb8d05079ad97723e71038afedca81f1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 30 00:39:30.906160 containerd[1816]: time="2025-04-30T00:39:30.906118415Z" level=info msg="CreateContainer within sandbox \"98fc488e3eb51fd1a009d9447a7f0ce2eb8d05079ad97723e71038afedca81f1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7495e452d462f8ec764abf9bc8cbe2a00a918f7c45ae329510d28872faa235a9\"" Apr 30 00:39:30.906762 containerd[1816]: time="2025-04-30T00:39:30.906727133Z" level=info msg="StartContainer for \"7495e452d462f8ec764abf9bc8cbe2a00a918f7c45ae329510d28872faa235a9\"" Apr 30 00:39:30.960550 containerd[1816]: time="2025-04-30T00:39:30.960479491Z" level=info msg="StartContainer for \"7495e452d462f8ec764abf9bc8cbe2a00a918f7c45ae329510d28872faa235a9\" returns successfully" Apr 30 00:39:31.048502 kubelet[3367]: I0430 00:39:31.044700 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gttvr" podStartSLOduration=4.044683757 podStartE2EDuration="4.044683757s" podCreationTimestamp="2025-04-30 00:39:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:39:31.025481455 +0000 UTC m=+72.536754412" watchObservedRunningTime="2025-04-30 00:39:31.044683757 +0000 UTC m=+72.555956714" Apr 30 00:39:31.119143 
kubelet[3367]: I0430 00:39:31.117692 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-75d56447db-5tqsh" podStartSLOduration=3.117674544 podStartE2EDuration="3.117674544s" podCreationTimestamp="2025-04-30 00:39:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:39:31.116864466 +0000 UTC m=+72.628137463" watchObservedRunningTime="2025-04-30 00:39:31.117674544 +0000 UTC m=+72.628947501" Apr 30 00:39:31.119143 kubelet[3367]: I0430 00:39:31.118854 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-55dc6dd8ff-5nxtf" podStartSLOduration=5.11884342 podStartE2EDuration="5.11884342s" podCreationTimestamp="2025-04-30 00:39:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:39:31.045687714 +0000 UTC m=+72.556960711" watchObservedRunningTime="2025-04-30 00:39:31.11884342 +0000 UTC m=+72.630116417" Apr 30 00:39:32.701620 systemd-networkd[1368]: cali701047a4528: Gained IPv6LL Apr 30 00:39:33.052932 systemd[1]: run-containerd-runc-k8s.io-7495e452d462f8ec764abf9bc8cbe2a00a918f7c45ae329510d28872faa235a9-runc.8z54tD.mount: Deactivated successfully. Apr 30 00:40:19.238438 containerd[1816]: time="2025-04-30T00:40:19.238301584Z" level=info msg="StopPodSandbox for \"be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260\"" Apr 30 00:40:19.308017 containerd[1816]: 2025-04-30 00:40:19.275 [WARNING][7578] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--2q77n-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2e0ed15b-c5c4-4942-8821-6ea97e00435b", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-2b660cb835", ContainerID:"cb7c664963adc51d55d531c019da1e3bf12f253090dab4bf501bfd219352816d", Pod:"coredns-7db6d8ff4d-2q77n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif6b119671f3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:40:19.308017 containerd[1816]: 2025-04-30 00:40:19.275 [INFO][7578] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" Apr 30 00:40:19.308017 containerd[1816]: 2025-04-30 00:40:19.275 [INFO][7578] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" iface="eth0" netns="" Apr 30 00:40:19.308017 containerd[1816]: 2025-04-30 00:40:19.275 [INFO][7578] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" Apr 30 00:40:19.308017 containerd[1816]: 2025-04-30 00:40:19.275 [INFO][7578] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" Apr 30 00:40:19.308017 containerd[1816]: 2025-04-30 00:40:19.294 [INFO][7585] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" HandleID="k8s-pod-network.be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" Workload="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--2q77n-eth0" Apr 30 00:40:19.308017 containerd[1816]: 2025-04-30 00:40:19.294 [INFO][7585] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:40:19.308017 containerd[1816]: 2025-04-30 00:40:19.294 [INFO][7585] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:40:19.308017 containerd[1816]: 2025-04-30 00:40:19.303 [WARNING][7585] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" HandleID="k8s-pod-network.be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" Workload="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--2q77n-eth0" Apr 30 00:40:19.308017 containerd[1816]: 2025-04-30 00:40:19.303 [INFO][7585] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" HandleID="k8s-pod-network.be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" Workload="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--2q77n-eth0" Apr 30 00:40:19.308017 containerd[1816]: 2025-04-30 00:40:19.305 [INFO][7585] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:40:19.308017 containerd[1816]: 2025-04-30 00:40:19.306 [INFO][7578] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" Apr 30 00:40:19.309041 containerd[1816]: time="2025-04-30T00:40:19.308056691Z" level=info msg="TearDown network for sandbox \"be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260\" successfully" Apr 30 00:40:19.309041 containerd[1816]: time="2025-04-30T00:40:19.308080491Z" level=info msg="StopPodSandbox for \"be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260\" returns successfully" Apr 30 00:40:19.309041 containerd[1816]: time="2025-04-30T00:40:19.308544410Z" level=info msg="RemovePodSandbox for \"be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260\"" Apr 30 00:40:19.309041 containerd[1816]: time="2025-04-30T00:40:19.308577890Z" level=info msg="Forcibly stopping sandbox \"be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260\"" Apr 30 00:40:19.371379 containerd[1816]: 2025-04-30 00:40:19.342 [WARNING][7603] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--2q77n-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2e0ed15b-c5c4-4942-8821-6ea97e00435b", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-2b660cb835", ContainerID:"cb7c664963adc51d55d531c019da1e3bf12f253090dab4bf501bfd219352816d", Pod:"coredns-7db6d8ff4d-2q77n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif6b119671f3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:40:19.371379 containerd[1816]: 2025-04-30 00:40:19.342 [INFO][7603] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" Apr 30 00:40:19.371379 containerd[1816]: 2025-04-30 00:40:19.342 [INFO][7603] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" iface="eth0" netns="" Apr 30 00:40:19.371379 containerd[1816]: 2025-04-30 00:40:19.342 [INFO][7603] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" Apr 30 00:40:19.371379 containerd[1816]: 2025-04-30 00:40:19.342 [INFO][7603] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" Apr 30 00:40:19.371379 containerd[1816]: 2025-04-30 00:40:19.358 [INFO][7610] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" HandleID="k8s-pod-network.be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" Workload="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--2q77n-eth0" Apr 30 00:40:19.371379 containerd[1816]: 2025-04-30 00:40:19.358 [INFO][7610] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:40:19.371379 containerd[1816]: 2025-04-30 00:40:19.358 [INFO][7610] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:40:19.371379 containerd[1816]: 2025-04-30 00:40:19.367 [WARNING][7610] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" HandleID="k8s-pod-network.be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" Workload="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--2q77n-eth0" Apr 30 00:40:19.371379 containerd[1816]: 2025-04-30 00:40:19.367 [INFO][7610] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" HandleID="k8s-pod-network.be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" Workload="ci--4081.3.3--a--2b660cb835-k8s-coredns--7db6d8ff4d--2q77n-eth0" Apr 30 00:40:19.371379 containerd[1816]: 2025-04-30 00:40:19.368 [INFO][7610] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:40:19.371379 containerd[1816]: 2025-04-30 00:40:19.369 [INFO][7603] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260" Apr 30 00:40:19.371936 containerd[1816]: time="2025-04-30T00:40:19.371421858Z" level=info msg="TearDown network for sandbox \"be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260\" successfully" Apr 30 00:40:19.385472 containerd[1816]: time="2025-04-30T00:40:19.385430575Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:40:19.385559 containerd[1816]: time="2025-04-30T00:40:19.385496895Z" level=info msg="RemovePodSandbox \"be3db9a1108069bbb8c7bd5318a368a7052b7c5e32f22814089de861f4d15260\" returns successfully" Apr 30 00:40:19.385992 containerd[1816]: time="2025-04-30T00:40:19.385972613Z" level=info msg="StopPodSandbox for \"d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d\"" Apr 30 00:40:19.449592 containerd[1816]: 2025-04-30 00:40:19.418 [WARNING][7628] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--588cb9568f--qkvlw-eth0" Apr 30 00:40:19.449592 containerd[1816]: 2025-04-30 00:40:19.418 [INFO][7628] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" Apr 30 00:40:19.449592 containerd[1816]: 2025-04-30 00:40:19.418 [INFO][7628] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" iface="eth0" netns="" Apr 30 00:40:19.449592 containerd[1816]: 2025-04-30 00:40:19.418 [INFO][7628] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" Apr 30 00:40:19.449592 containerd[1816]: 2025-04-30 00:40:19.418 [INFO][7628] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" Apr 30 00:40:19.449592 containerd[1816]: 2025-04-30 00:40:19.436 [INFO][7635] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" HandleID="k8s-pod-network.d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--588cb9568f--qkvlw-eth0" Apr 30 00:40:19.449592 containerd[1816]: 2025-04-30 00:40:19.437 [INFO][7635] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:40:19.449592 containerd[1816]: 2025-04-30 00:40:19.437 [INFO][7635] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:40:19.449592 containerd[1816]: 2025-04-30 00:40:19.444 [WARNING][7635] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" HandleID="k8s-pod-network.d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--588cb9568f--qkvlw-eth0" Apr 30 00:40:19.449592 containerd[1816]: 2025-04-30 00:40:19.444 [INFO][7635] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" HandleID="k8s-pod-network.d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--588cb9568f--qkvlw-eth0" Apr 30 00:40:19.449592 containerd[1816]: 2025-04-30 00:40:19.446 [INFO][7635] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:40:19.449592 containerd[1816]: 2025-04-30 00:40:19.447 [INFO][7628] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" Apr 30 00:40:19.449592 containerd[1816]: time="2025-04-30T00:40:19.449585219Z" level=info msg="TearDown network for sandbox \"d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d\" successfully" Apr 30 00:40:19.450044 containerd[1816]: time="2025-04-30T00:40:19.449611499Z" level=info msg="StopPodSandbox for \"d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d\" returns successfully" Apr 30 00:40:19.450760 containerd[1816]: time="2025-04-30T00:40:19.450460936Z" level=info msg="RemovePodSandbox for \"d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d\"" Apr 30 00:40:19.450760 containerd[1816]: time="2025-04-30T00:40:19.450490736Z" level=info msg="Forcibly stopping sandbox \"d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d\"" Apr 30 00:40:19.536504 containerd[1816]: 2025-04-30 00:40:19.503 [WARNING][7653] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--588cb9568f--qkvlw-eth0" Apr 30 00:40:19.536504 containerd[1816]: 2025-04-30 00:40:19.503 [INFO][7653] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" Apr 30 00:40:19.536504 containerd[1816]: 2025-04-30 00:40:19.503 [INFO][7653] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" iface="eth0" netns="" Apr 30 00:40:19.536504 containerd[1816]: 2025-04-30 00:40:19.503 [INFO][7653] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" Apr 30 00:40:19.536504 containerd[1816]: 2025-04-30 00:40:19.503 [INFO][7653] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" Apr 30 00:40:19.536504 containerd[1816]: 2025-04-30 00:40:19.521 [INFO][7661] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" HandleID="k8s-pod-network.d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--588cb9568f--qkvlw-eth0" Apr 30 00:40:19.536504 containerd[1816]: 2025-04-30 00:40:19.522 [INFO][7661] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:40:19.536504 containerd[1816]: 2025-04-30 00:40:19.522 [INFO][7661] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:40:19.536504 containerd[1816]: 2025-04-30 00:40:19.531 [WARNING][7661] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" HandleID="k8s-pod-network.d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--588cb9568f--qkvlw-eth0" Apr 30 00:40:19.536504 containerd[1816]: 2025-04-30 00:40:19.531 [INFO][7661] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" HandleID="k8s-pod-network.d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--kube--controllers--588cb9568f--qkvlw-eth0" Apr 30 00:40:19.536504 containerd[1816]: 2025-04-30 00:40:19.532 [INFO][7661] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:40:19.536504 containerd[1816]: 2025-04-30 00:40:19.534 [INFO][7653] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d" Apr 30 00:40:19.536504 containerd[1816]: time="2025-04-30T00:40:19.536045395Z" level=info msg="TearDown network for sandbox \"d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d\" successfully" Apr 30 00:40:19.547246 containerd[1816]: time="2025-04-30T00:40:19.547131281Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:40:19.547246 containerd[1816]: time="2025-04-30T00:40:19.547226400Z" level=info msg="RemovePodSandbox \"d8b19a6f61224fd2009c3ebebd1f4ec1c5dcfd3e60b39228a3e271dfe3ad858d\" returns successfully" Apr 30 00:40:19.547798 containerd[1816]: time="2025-04-30T00:40:19.547769679Z" level=info msg="StopPodSandbox for \"ffd8812d5db16ab434d511509365c6cde8b62e720381f4cef187b68490816470\"" Apr 30 00:40:19.547897 containerd[1816]: time="2025-04-30T00:40:19.547849878Z" level=info msg="TearDown network for sandbox \"ffd8812d5db16ab434d511509365c6cde8b62e720381f4cef187b68490816470\" successfully" Apr 30 00:40:19.547897 containerd[1816]: time="2025-04-30T00:40:19.547864438Z" level=info msg="StopPodSandbox for \"ffd8812d5db16ab434d511509365c6cde8b62e720381f4cef187b68490816470\" returns successfully" Apr 30 00:40:19.548546 containerd[1816]: time="2025-04-30T00:40:19.548182277Z" level=info msg="RemovePodSandbox for \"ffd8812d5db16ab434d511509365c6cde8b62e720381f4cef187b68490816470\"" Apr 30 00:40:19.548546 containerd[1816]: time="2025-04-30T00:40:19.548207557Z" level=info msg="Forcibly stopping sandbox \"ffd8812d5db16ab434d511509365c6cde8b62e720381f4cef187b68490816470\"" Apr 30 00:40:19.548546 containerd[1816]: time="2025-04-30T00:40:19.548254717Z" level=info msg="TearDown network for sandbox \"ffd8812d5db16ab434d511509365c6cde8b62e720381f4cef187b68490816470\" successfully" Apr 30 00:40:19.560129 containerd[1816]: time="2025-04-30T00:40:19.560086561Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ffd8812d5db16ab434d511509365c6cde8b62e720381f4cef187b68490816470\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:40:19.560228 containerd[1816]: time="2025-04-30T00:40:19.560171281Z" level=info msg="RemovePodSandbox \"ffd8812d5db16ab434d511509365c6cde8b62e720381f4cef187b68490816470\" returns successfully" Apr 30 00:40:19.560922 containerd[1816]: time="2025-04-30T00:40:19.560695359Z" level=info msg="StopPodSandbox for \"979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146\"" Apr 30 00:40:19.636068 containerd[1816]: 2025-04-30 00:40:19.598 [WARNING][7679] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0" Apr 30 00:40:19.636068 containerd[1816]: 2025-04-30 00:40:19.598 [INFO][7679] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" Apr 30 00:40:19.636068 containerd[1816]: 2025-04-30 00:40:19.598 [INFO][7679] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" iface="eth0" netns="" Apr 30 00:40:19.636068 containerd[1816]: 2025-04-30 00:40:19.598 [INFO][7679] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" Apr 30 00:40:19.636068 containerd[1816]: 2025-04-30 00:40:19.598 [INFO][7679] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" Apr 30 00:40:19.636068 containerd[1816]: 2025-04-30 00:40:19.621 [INFO][7686] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" HandleID="k8s-pod-network.979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0" Apr 30 00:40:19.636068 containerd[1816]: 2025-04-30 00:40:19.622 [INFO][7686] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:40:19.636068 containerd[1816]: 2025-04-30 00:40:19.622 [INFO][7686] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:40:19.636068 containerd[1816]: 2025-04-30 00:40:19.630 [WARNING][7686] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" HandleID="k8s-pod-network.979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0" Apr 30 00:40:19.636068 containerd[1816]: 2025-04-30 00:40:19.630 [INFO][7686] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" HandleID="k8s-pod-network.979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0" Apr 30 00:40:19.636068 containerd[1816]: 2025-04-30 00:40:19.632 [INFO][7686] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:40:19.636068 containerd[1816]: 2025-04-30 00:40:19.634 [INFO][7679] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" Apr 30 00:40:19.636762 containerd[1816]: time="2025-04-30T00:40:19.636108609Z" level=info msg="TearDown network for sandbox \"979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146\" successfully" Apr 30 00:40:19.636762 containerd[1816]: time="2025-04-30T00:40:19.636134209Z" level=info msg="StopPodSandbox for \"979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146\" returns successfully" Apr 30 00:40:19.636762 containerd[1816]: time="2025-04-30T00:40:19.636682727Z" level=info msg="RemovePodSandbox for \"979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146\"" Apr 30 00:40:19.636762 containerd[1816]: time="2025-04-30T00:40:19.636713207Z" level=info msg="Forcibly stopping sandbox \"979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146\"" Apr 30 00:40:19.708098 containerd[1816]: 2025-04-30 00:40:19.673 [WARNING][7704] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0" Apr 30 00:40:19.708098 containerd[1816]: 2025-04-30 00:40:19.673 [INFO][7704] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" Apr 30 00:40:19.708098 containerd[1816]: 2025-04-30 00:40:19.673 [INFO][7704] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" iface="eth0" netns="" Apr 30 00:40:19.708098 containerd[1816]: 2025-04-30 00:40:19.673 [INFO][7704] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" Apr 30 00:40:19.708098 containerd[1816]: 2025-04-30 00:40:19.673 [INFO][7704] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" Apr 30 00:40:19.708098 containerd[1816]: 2025-04-30 00:40:19.694 [INFO][7711] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" HandleID="k8s-pod-network.979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0" Apr 30 00:40:19.708098 containerd[1816]: 2025-04-30 00:40:19.694 [INFO][7711] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:40:19.708098 containerd[1816]: 2025-04-30 00:40:19.694 [INFO][7711] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:40:19.708098 containerd[1816]: 2025-04-30 00:40:19.703 [WARNING][7711] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" HandleID="k8s-pod-network.979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0" Apr 30 00:40:19.708098 containerd[1816]: 2025-04-30 00:40:19.703 [INFO][7711] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" HandleID="k8s-pod-network.979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0" Apr 30 00:40:19.708098 containerd[1816]: 2025-04-30 00:40:19.704 [INFO][7711] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:40:19.708098 containerd[1816]: 2025-04-30 00:40:19.706 [INFO][7704] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146" Apr 30 00:40:19.708740 containerd[1816]: time="2025-04-30T00:40:19.708282548Z" level=info msg="TearDown network for sandbox \"979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146\" successfully" Apr 30 00:40:19.718225 containerd[1816]: time="2025-04-30T00:40:19.718171918Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:40:19.718392 containerd[1816]: time="2025-04-30T00:40:19.718244558Z" level=info msg="RemovePodSandbox \"979d8139626ba23e7274f2c63a5cd4a77542114ba765bfe52a31660c711db146\" returns successfully" Apr 30 00:40:19.718962 containerd[1816]: time="2025-04-30T00:40:19.718785596Z" level=info msg="StopPodSandbox for \"0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643\"" Apr 30 00:40:19.793253 containerd[1816]: 2025-04-30 00:40:19.756 [WARNING][7729] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0" Apr 30 00:40:19.793253 containerd[1816]: 2025-04-30 00:40:19.756 [INFO][7729] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" Apr 30 00:40:19.793253 containerd[1816]: 2025-04-30 00:40:19.756 [INFO][7729] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" iface="eth0" netns="" Apr 30 00:40:19.793253 containerd[1816]: 2025-04-30 00:40:19.756 [INFO][7729] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" Apr 30 00:40:19.793253 containerd[1816]: 2025-04-30 00:40:19.756 [INFO][7729] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" Apr 30 00:40:19.793253 containerd[1816]: 2025-04-30 00:40:19.778 [INFO][7736] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" HandleID="k8s-pod-network.0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0" Apr 30 00:40:19.793253 containerd[1816]: 2025-04-30 00:40:19.778 [INFO][7736] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:40:19.793253 containerd[1816]: 2025-04-30 00:40:19.778 [INFO][7736] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:40:19.793253 containerd[1816]: 2025-04-30 00:40:19.787 [WARNING][7736] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" HandleID="k8s-pod-network.0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0" Apr 30 00:40:19.793253 containerd[1816]: 2025-04-30 00:40:19.787 [INFO][7736] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" HandleID="k8s-pod-network.0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0" Apr 30 00:40:19.793253 containerd[1816]: 2025-04-30 00:40:19.789 [INFO][7736] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:40:19.793253 containerd[1816]: 2025-04-30 00:40:19.791 [INFO][7729] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" Apr 30 00:40:19.793253 containerd[1816]: time="2025-04-30T00:40:19.793218769Z" level=info msg="TearDown network for sandbox \"0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643\" successfully" Apr 30 00:40:19.793253 containerd[1816]: time="2025-04-30T00:40:19.793251569Z" level=info msg="StopPodSandbox for \"0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643\" returns successfully" Apr 30 00:40:19.794776 containerd[1816]: time="2025-04-30T00:40:19.794561525Z" level=info msg="RemovePodSandbox for \"0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643\"" Apr 30 00:40:19.794776 containerd[1816]: time="2025-04-30T00:40:19.794669084Z" level=info msg="Forcibly stopping sandbox \"0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643\"" Apr 30 00:40:19.869597 containerd[1816]: 2025-04-30 00:40:19.833 [WARNING][7754] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0" Apr 30 00:40:19.869597 containerd[1816]: 2025-04-30 00:40:19.833 [INFO][7754] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" Apr 30 00:40:19.869597 containerd[1816]: 2025-04-30 00:40:19.833 [INFO][7754] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" iface="eth0" netns="" Apr 30 00:40:19.869597 containerd[1816]: 2025-04-30 00:40:19.833 [INFO][7754] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" Apr 30 00:40:19.869597 containerd[1816]: 2025-04-30 00:40:19.833 [INFO][7754] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" Apr 30 00:40:19.869597 containerd[1816]: 2025-04-30 00:40:19.854 [INFO][7761] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" HandleID="k8s-pod-network.0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0" Apr 30 00:40:19.869597 containerd[1816]: 2025-04-30 00:40:19.854 [INFO][7761] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:40:19.869597 containerd[1816]: 2025-04-30 00:40:19.854 [INFO][7761] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:40:19.869597 containerd[1816]: 2025-04-30 00:40:19.864 [WARNING][7761] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" HandleID="k8s-pod-network.0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0" Apr 30 00:40:19.869597 containerd[1816]: 2025-04-30 00:40:19.864 [INFO][7761] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" HandleID="k8s-pod-network.0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--95fgn-eth0" Apr 30 00:40:19.869597 containerd[1816]: 2025-04-30 00:40:19.865 [INFO][7761] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:40:19.869597 containerd[1816]: 2025-04-30 00:40:19.867 [INFO][7754] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643" Apr 30 00:40:19.869978 containerd[1816]: time="2025-04-30T00:40:19.869605015Z" level=info msg="TearDown network for sandbox \"0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643\" successfully" Apr 30 00:40:19.879185 containerd[1816]: time="2025-04-30T00:40:19.879134106Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:40:19.879374 containerd[1816]: time="2025-04-30T00:40:19.879210146Z" level=info msg="RemovePodSandbox \"0d7f1cd99252e57ad32f0f71ed0488413b0e848f63e9bcb7b4a7331574193643\" returns successfully" Apr 30 00:40:19.879879 containerd[1816]: time="2025-04-30T00:40:19.879700664Z" level=info msg="StopPodSandbox for \"fad51d16b15d6448e7bbc0492bbd9a84fd0be32d7572135f482648584d03bb3b\"" Apr 30 00:40:19.879879 containerd[1816]: time="2025-04-30T00:40:19.879777864Z" level=info msg="TearDown network for sandbox \"fad51d16b15d6448e7bbc0492bbd9a84fd0be32d7572135f482648584d03bb3b\" successfully" Apr 30 00:40:19.879879 containerd[1816]: time="2025-04-30T00:40:19.879789664Z" level=info msg="StopPodSandbox for \"fad51d16b15d6448e7bbc0492bbd9a84fd0be32d7572135f482648584d03bb3b\" returns successfully" Apr 30 00:40:19.881093 containerd[1816]: time="2025-04-30T00:40:19.880144543Z" level=info msg="RemovePodSandbox for \"fad51d16b15d6448e7bbc0492bbd9a84fd0be32d7572135f482648584d03bb3b\"" Apr 30 00:40:19.881093 containerd[1816]: time="2025-04-30T00:40:19.880177463Z" level=info msg="Forcibly stopping sandbox \"fad51d16b15d6448e7bbc0492bbd9a84fd0be32d7572135f482648584d03bb3b\"" Apr 30 00:40:19.881093 containerd[1816]: time="2025-04-30T00:40:19.880222503Z" level=info msg="TearDown network for sandbox \"fad51d16b15d6448e7bbc0492bbd9a84fd0be32d7572135f482648584d03bb3b\" successfully" Apr 30 00:40:19.889169 containerd[1816]: time="2025-04-30T00:40:19.889129556Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fad51d16b15d6448e7bbc0492bbd9a84fd0be32d7572135f482648584d03bb3b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:40:19.889258 containerd[1816]: time="2025-04-30T00:40:19.889200435Z" level=info msg="RemovePodSandbox \"fad51d16b15d6448e7bbc0492bbd9a84fd0be32d7572135f482648584d03bb3b\" returns successfully" Apr 30 00:40:19.889813 containerd[1816]: time="2025-04-30T00:40:19.889704874Z" level=info msg="StopPodSandbox for \"1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464\"" Apr 30 00:40:19.960075 containerd[1816]: 2025-04-30 00:40:19.927 [WARNING][7780] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--kmccd-eth0" Apr 30 00:40:19.960075 containerd[1816]: 2025-04-30 00:40:19.927 [INFO][7780] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" Apr 30 00:40:19.960075 containerd[1816]: 2025-04-30 00:40:19.927 [INFO][7780] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" iface="eth0" netns="" Apr 30 00:40:19.960075 containerd[1816]: 2025-04-30 00:40:19.927 [INFO][7780] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" Apr 30 00:40:19.960075 containerd[1816]: 2025-04-30 00:40:19.927 [INFO][7780] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" Apr 30 00:40:19.960075 containerd[1816]: 2025-04-30 00:40:19.946 [INFO][7787] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" HandleID="k8s-pod-network.1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--kmccd-eth0" Apr 30 00:40:19.960075 containerd[1816]: 2025-04-30 00:40:19.946 [INFO][7787] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:40:19.960075 containerd[1816]: 2025-04-30 00:40:19.946 [INFO][7787] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:40:19.960075 containerd[1816]: 2025-04-30 00:40:19.954 [WARNING][7787] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" HandleID="k8s-pod-network.1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--kmccd-eth0" Apr 30 00:40:19.960075 containerd[1816]: 2025-04-30 00:40:19.954 [INFO][7787] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" HandleID="k8s-pod-network.1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--kmccd-eth0" Apr 30 00:40:19.960075 containerd[1816]: 2025-04-30 00:40:19.956 [INFO][7787] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:40:19.960075 containerd[1816]: 2025-04-30 00:40:19.957 [INFO][7780] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" Apr 30 00:40:19.960592 containerd[1816]: time="2025-04-30T00:40:19.960125019Z" level=info msg="TearDown network for sandbox \"1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464\" successfully" Apr 30 00:40:19.960592 containerd[1816]: time="2025-04-30T00:40:19.960154379Z" level=info msg="StopPodSandbox for \"1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464\" returns successfully" Apr 30 00:40:19.960937 containerd[1816]: time="2025-04-30T00:40:19.960905096Z" level=info msg="RemovePodSandbox for \"1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464\"" Apr 30 00:40:19.961000 containerd[1816]: time="2025-04-30T00:40:19.960943536Z" level=info msg="Forcibly stopping sandbox \"1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464\"" Apr 30 00:40:20.032199 containerd[1816]: 2025-04-30 00:40:19.998 [WARNING][7805] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" WorkloadEndpoint="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--kmccd-eth0" Apr 30 00:40:20.032199 containerd[1816]: 2025-04-30 00:40:19.998 [INFO][7805] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" Apr 30 00:40:20.032199 containerd[1816]: 2025-04-30 00:40:19.998 [INFO][7805] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" iface="eth0" netns="" Apr 30 00:40:20.032199 containerd[1816]: 2025-04-30 00:40:19.998 [INFO][7805] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" Apr 30 00:40:20.032199 containerd[1816]: 2025-04-30 00:40:19.998 [INFO][7805] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" Apr 30 00:40:20.032199 containerd[1816]: 2025-04-30 00:40:20.017 [INFO][7812] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" HandleID="k8s-pod-network.1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--kmccd-eth0" Apr 30 00:40:20.032199 containerd[1816]: 2025-04-30 00:40:20.018 [INFO][7812] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:40:20.032199 containerd[1816]: 2025-04-30 00:40:20.018 [INFO][7812] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:40:20.032199 containerd[1816]: 2025-04-30 00:40:20.026 [WARNING][7812] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" HandleID="k8s-pod-network.1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--kmccd-eth0" Apr 30 00:40:20.032199 containerd[1816]: 2025-04-30 00:40:20.026 [INFO][7812] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" HandleID="k8s-pod-network.1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--7688785779--kmccd-eth0" Apr 30 00:40:20.032199 containerd[1816]: 2025-04-30 00:40:20.028 [INFO][7812] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:40:20.032199 containerd[1816]: 2025-04-30 00:40:20.030 [INFO][7805] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464" Apr 30 00:40:20.032595 containerd[1816]: time="2025-04-30T00:40:20.032261398Z" level=info msg="TearDown network for sandbox \"1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464\" successfully" Apr 30 00:40:20.041574 containerd[1816]: time="2025-04-30T00:40:20.041513530Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:40:20.041744 containerd[1816]: time="2025-04-30T00:40:20.041598330Z" level=info msg="RemovePodSandbox \"1ee74ded3455f432714bfd5db54fd0fc6bf5307e78cb8d86e489419ab7498464\" returns successfully" Apr 30 00:40:20.042250 containerd[1816]: time="2025-04-30T00:40:20.042039968Z" level=info msg="StopPodSandbox for \"bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c\"" Apr 30 00:40:20.112244 containerd[1816]: 2025-04-30 00:40:20.079 [WARNING][7831] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--vxdt7-eth0", GenerateName:"calico-apiserver-5db4fc9d49-", Namespace:"calico-apiserver", SelfLink:"", UID:"90111c95-ac27-4e82-acee-f83ab191ee0f", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5db4fc9d49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-2b660cb835", ContainerID:"eca2e65d8739640b51213942d1297f9ac2273fbd9e49eaf366827ea22d17f3ae", Pod:"calico-apiserver-5db4fc9d49-vxdt7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6a40a0a16a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:40:20.112244 containerd[1816]: 2025-04-30 00:40:20.079 [INFO][7831] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" Apr 30 00:40:20.112244 containerd[1816]: 2025-04-30 00:40:20.079 [INFO][7831] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" iface="eth0" netns="" Apr 30 00:40:20.112244 containerd[1816]: 2025-04-30 00:40:20.079 [INFO][7831] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" Apr 30 00:40:20.112244 containerd[1816]: 2025-04-30 00:40:20.079 [INFO][7831] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" Apr 30 00:40:20.112244 containerd[1816]: 2025-04-30 00:40:20.099 [INFO][7839] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" HandleID="k8s-pod-network.bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--vxdt7-eth0" Apr 30 00:40:20.112244 containerd[1816]: 2025-04-30 00:40:20.099 [INFO][7839] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:40:20.112244 containerd[1816]: 2025-04-30 00:40:20.099 [INFO][7839] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:40:20.112244 containerd[1816]: 2025-04-30 00:40:20.107 [WARNING][7839] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" HandleID="k8s-pod-network.bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--vxdt7-eth0" Apr 30 00:40:20.112244 containerd[1816]: 2025-04-30 00:40:20.108 [INFO][7839] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" HandleID="k8s-pod-network.bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--vxdt7-eth0" Apr 30 00:40:20.112244 containerd[1816]: 2025-04-30 00:40:20.109 [INFO][7839] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:40:20.112244 containerd[1816]: 2025-04-30 00:40:20.110 [INFO][7831] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" Apr 30 00:40:20.112881 containerd[1816]: time="2025-04-30T00:40:20.112227554Z" level=info msg="TearDown network for sandbox \"bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c\" successfully" Apr 30 00:40:20.112881 containerd[1816]: time="2025-04-30T00:40:20.112259194Z" level=info msg="StopPodSandbox for \"bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c\" returns successfully" Apr 30 00:40:20.113698 containerd[1816]: time="2025-04-30T00:40:20.113674790Z" level=info msg="RemovePodSandbox for \"bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c\"" Apr 30 00:40:20.114236 containerd[1816]: time="2025-04-30T00:40:20.113824949Z" level=info msg="Forcibly stopping sandbox \"bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c\"" Apr 30 00:40:20.191100 containerd[1816]: 2025-04-30 00:40:20.159 [WARNING][7856] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--vxdt7-eth0", GenerateName:"calico-apiserver-5db4fc9d49-", Namespace:"calico-apiserver", SelfLink:"", UID:"90111c95-ac27-4e82-acee-f83ab191ee0f", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5db4fc9d49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-2b660cb835", ContainerID:"eca2e65d8739640b51213942d1297f9ac2273fbd9e49eaf366827ea22d17f3ae", Pod:"calico-apiserver-5db4fc9d49-vxdt7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6a40a0a16a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:40:20.191100 containerd[1816]: 2025-04-30 00:40:20.159 [INFO][7856] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" Apr 30 00:40:20.191100 containerd[1816]: 2025-04-30 00:40:20.159 [INFO][7856] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" iface="eth0" netns="" Apr 30 00:40:20.191100 containerd[1816]: 2025-04-30 00:40:20.159 [INFO][7856] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" Apr 30 00:40:20.191100 containerd[1816]: 2025-04-30 00:40:20.159 [INFO][7856] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" Apr 30 00:40:20.191100 containerd[1816]: 2025-04-30 00:40:20.178 [INFO][7864] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" HandleID="k8s-pod-network.bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--vxdt7-eth0" Apr 30 00:40:20.191100 containerd[1816]: 2025-04-30 00:40:20.178 [INFO][7864] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:40:20.191100 containerd[1816]: 2025-04-30 00:40:20.178 [INFO][7864] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:40:20.191100 containerd[1816]: 2025-04-30 00:40:20.186 [WARNING][7864] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" HandleID="k8s-pod-network.bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--vxdt7-eth0" Apr 30 00:40:20.191100 containerd[1816]: 2025-04-30 00:40:20.186 [INFO][7864] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" HandleID="k8s-pod-network.bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" Workload="ci--4081.3.3--a--2b660cb835-k8s-calico--apiserver--5db4fc9d49--vxdt7-eth0" Apr 30 00:40:20.191100 containerd[1816]: 2025-04-30 00:40:20.188 [INFO][7864] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:40:20.191100 containerd[1816]: 2025-04-30 00:40:20.189 [INFO][7856] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c" Apr 30 00:40:20.191693 containerd[1816]: time="2025-04-30T00:40:20.191134033Z" level=info msg="TearDown network for sandbox \"bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c\" successfully" Apr 30 00:40:20.235187 containerd[1816]: time="2025-04-30T00:40:20.235123178Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:40:20.235401 containerd[1816]: time="2025-04-30T00:40:20.235203418Z" level=info msg="RemovePodSandbox \"bd0bc4411850e2f360ce8dccf470270d99523febf5a54d0677e4955f278d9e6c\" returns successfully" Apr 30 00:40:25.193671 systemd[1]: Started sshd@7-10.200.20.17:22-10.200.16.10:43058.service - OpenSSH per-connection server daemon (10.200.16.10:43058). 
Apr 30 00:40:25.604528 sshd[7873]: Accepted publickey for core from 10.200.16.10 port 43058 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw Apr 30 00:40:25.606386 sshd[7873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:40:25.610848 systemd-logind[1748]: New session 10 of user core. Apr 30 00:40:25.613634 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 00:40:25.977592 sshd[7873]: pam_unix(sshd:session): session closed for user core Apr 30 00:40:25.980449 systemd[1]: sshd@7-10.200.20.17:22-10.200.16.10:43058.service: Deactivated successfully. Apr 30 00:40:25.984789 systemd-logind[1748]: Session 10 logged out. Waiting for processes to exit. Apr 30 00:40:25.985447 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 00:40:25.986311 systemd-logind[1748]: Removed session 10. Apr 30 00:40:31.048575 systemd[1]: Started sshd@8-10.200.20.17:22-10.200.16.10:49602.service - OpenSSH per-connection server daemon (10.200.16.10:49602). Apr 30 00:40:31.457732 sshd[7946]: Accepted publickey for core from 10.200.16.10 port 49602 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw Apr 30 00:40:31.459085 sshd[7946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:40:31.463461 systemd-logind[1748]: New session 11 of user core. Apr 30 00:40:31.467787 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 00:40:31.821977 sshd[7946]: pam_unix(sshd:session): session closed for user core Apr 30 00:40:31.824868 systemd[1]: sshd@8-10.200.20.17:22-10.200.16.10:49602.service: Deactivated successfully. Apr 30 00:40:31.825129 systemd-logind[1748]: Session 11 logged out. Waiting for processes to exit. Apr 30 00:40:31.829732 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 00:40:31.831089 systemd-logind[1748]: Removed session 11. 
Apr 30 00:40:36.902604 systemd[1]: Started sshd@9-10.200.20.17:22-10.200.16.10:49608.service - OpenSSH per-connection server daemon (10.200.16.10:49608). Apr 30 00:40:37.314133 sshd[7964]: Accepted publickey for core from 10.200.16.10 port 49608 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw Apr 30 00:40:37.315508 sshd[7964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:40:37.319315 systemd-logind[1748]: New session 12 of user core. Apr 30 00:40:37.324626 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 30 00:40:37.677397 sshd[7964]: pam_unix(sshd:session): session closed for user core Apr 30 00:40:37.681519 systemd-logind[1748]: Session 12 logged out. Waiting for processes to exit. Apr 30 00:40:37.682122 systemd[1]: sshd@9-10.200.20.17:22-10.200.16.10:49608.service: Deactivated successfully. Apr 30 00:40:37.684646 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 00:40:37.685729 systemd-logind[1748]: Removed session 12. Apr 30 00:40:37.764590 systemd[1]: Started sshd@10-10.200.20.17:22-10.200.16.10:49614.service - OpenSSH per-connection server daemon (10.200.16.10:49614). Apr 30 00:40:38.211024 sshd[7979]: Accepted publickey for core from 10.200.16.10 port 49614 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw Apr 30 00:40:38.212425 sshd[7979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:40:38.216864 systemd-logind[1748]: New session 13 of user core. Apr 30 00:40:38.225581 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 00:40:38.624199 sshd[7979]: pam_unix(sshd:session): session closed for user core Apr 30 00:40:38.628982 systemd[1]: sshd@10-10.200.20.17:22-10.200.16.10:49614.service: Deactivated successfully. Apr 30 00:40:38.631548 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 00:40:38.632226 systemd-logind[1748]: Session 13 logged out. Waiting for processes to exit. 
Apr 30 00:40:38.633722 systemd-logind[1748]: Removed session 13. Apr 30 00:40:38.701639 systemd[1]: Started sshd@11-10.200.20.17:22-10.200.16.10:49616.service - OpenSSH per-connection server daemon (10.200.16.10:49616). Apr 30 00:40:39.113619 sshd[7990]: Accepted publickey for core from 10.200.16.10 port 49616 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw Apr 30 00:40:39.114918 sshd[7990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:40:39.118770 systemd-logind[1748]: New session 14 of user core. Apr 30 00:40:39.123667 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 00:40:39.476059 sshd[7990]: pam_unix(sshd:session): session closed for user core Apr 30 00:40:39.480515 systemd-logind[1748]: Session 14 logged out. Waiting for processes to exit. Apr 30 00:40:39.481125 systemd[1]: sshd@11-10.200.20.17:22-10.200.16.10:49616.service: Deactivated successfully. Apr 30 00:40:39.483528 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 00:40:39.485132 systemd-logind[1748]: Removed session 14. Apr 30 00:40:44.553627 systemd[1]: Started sshd@12-10.200.20.17:22-10.200.16.10:39510.service - OpenSSH per-connection server daemon (10.200.16.10:39510). Apr 30 00:40:45.001085 sshd[8008]: Accepted publickey for core from 10.200.16.10 port 39510 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw Apr 30 00:40:45.002882 sshd[8008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:40:45.007868 systemd-logind[1748]: New session 15 of user core. Apr 30 00:40:45.012644 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 30 00:40:45.383409 sshd[8008]: pam_unix(sshd:session): session closed for user core Apr 30 00:40:45.386638 systemd-logind[1748]: Session 15 logged out. Waiting for processes to exit. Apr 30 00:40:45.388246 systemd[1]: sshd@12-10.200.20.17:22-10.200.16.10:39510.service: Deactivated successfully. 
Apr 30 00:40:45.391630 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 00:40:45.393124 systemd-logind[1748]: Removed session 15. Apr 30 00:40:50.459598 systemd[1]: Started sshd@13-10.200.20.17:22-10.200.16.10:47494.service - OpenSSH per-connection server daemon (10.200.16.10:47494). Apr 30 00:40:50.897571 sshd[8022]: Accepted publickey for core from 10.200.16.10 port 47494 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw Apr 30 00:40:50.898968 sshd[8022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:40:50.903003 systemd-logind[1748]: New session 16 of user core. Apr 30 00:40:50.910617 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 30 00:40:51.278582 sshd[8022]: pam_unix(sshd:session): session closed for user core Apr 30 00:40:51.281422 systemd-logind[1748]: Session 16 logged out. Waiting for processes to exit. Apr 30 00:40:51.281941 systemd[1]: sshd@13-10.200.20.17:22-10.200.16.10:47494.service: Deactivated successfully. Apr 30 00:40:51.286230 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 00:40:51.287646 systemd-logind[1748]: Removed session 16. Apr 30 00:40:56.354579 systemd[1]: Started sshd@14-10.200.20.17:22-10.200.16.10:47510.service - OpenSSH per-connection server daemon (10.200.16.10:47510). Apr 30 00:40:56.793022 sshd[8044]: Accepted publickey for core from 10.200.16.10 port 47510 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw Apr 30 00:40:56.795482 sshd[8044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:40:56.800414 systemd-logind[1748]: New session 17 of user core. Apr 30 00:40:56.805602 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 30 00:40:57.174575 sshd[8044]: pam_unix(sshd:session): session closed for user core Apr 30 00:40:57.178506 systemd-logind[1748]: Session 17 logged out. Waiting for processes to exit. 
Apr 30 00:40:57.179071 systemd[1]: sshd@14-10.200.20.17:22-10.200.16.10:47510.service: Deactivated successfully. Apr 30 00:40:57.181845 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 00:40:57.183654 systemd-logind[1748]: Removed session 17. Apr 30 00:40:57.253803 systemd[1]: Started sshd@15-10.200.20.17:22-10.200.16.10:47512.service - OpenSSH per-connection server daemon (10.200.16.10:47512). Apr 30 00:40:57.694494 sshd[8058]: Accepted publickey for core from 10.200.16.10 port 47512 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw Apr 30 00:40:57.695860 sshd[8058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:40:57.699582 systemd-logind[1748]: New session 18 of user core. Apr 30 00:40:57.703592 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 30 00:40:58.170555 sshd[8058]: pam_unix(sshd:session): session closed for user core Apr 30 00:40:58.174272 systemd[1]: sshd@15-10.200.20.17:22-10.200.16.10:47512.service: Deactivated successfully. Apr 30 00:40:58.178404 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 00:40:58.179688 systemd-logind[1748]: Session 18 logged out. Waiting for processes to exit. Apr 30 00:40:58.181331 systemd-logind[1748]: Removed session 18. Apr 30 00:40:58.252243 systemd[1]: Started sshd@16-10.200.20.17:22-10.200.16.10:47528.service - OpenSSH per-connection server daemon (10.200.16.10:47528). Apr 30 00:40:58.729625 sshd[8092]: Accepted publickey for core from 10.200.16.10 port 47528 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw Apr 30 00:40:58.730994 sshd[8092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:40:58.735246 systemd-logind[1748]: New session 19 of user core. Apr 30 00:40:58.739607 systemd[1]: Started session-19.scope - Session 19 of User core. 
Apr 30 00:41:00.731178 sshd[8092]: pam_unix(sshd:session): session closed for user core Apr 30 00:41:00.735779 systemd-logind[1748]: Session 19 logged out. Waiting for processes to exit. Apr 30 00:41:00.736197 systemd[1]: sshd@16-10.200.20.17:22-10.200.16.10:47528.service: Deactivated successfully. Apr 30 00:41:00.740359 systemd[1]: session-19.scope: Deactivated successfully. Apr 30 00:41:00.745187 systemd-logind[1748]: Removed session 19. Apr 30 00:41:00.817241 systemd[1]: Started sshd@17-10.200.20.17:22-10.200.16.10:56748.service - OpenSSH per-connection server daemon (10.200.16.10:56748). Apr 30 00:41:01.256666 sshd[8130]: Accepted publickey for core from 10.200.16.10 port 56748 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw Apr 30 00:41:01.258378 sshd[8130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:41:01.262782 systemd-logind[1748]: New session 20 of user core. Apr 30 00:41:01.268652 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 30 00:41:01.758344 sshd[8130]: pam_unix(sshd:session): session closed for user core Apr 30 00:41:01.761913 systemd[1]: sshd@17-10.200.20.17:22-10.200.16.10:56748.service: Deactivated successfully. Apr 30 00:41:01.764845 systemd-logind[1748]: Session 20 logged out. Waiting for processes to exit. Apr 30 00:41:01.765241 systemd[1]: session-20.scope: Deactivated successfully. Apr 30 00:41:01.767092 systemd-logind[1748]: Removed session 20. Apr 30 00:41:01.833612 systemd[1]: Started sshd@18-10.200.20.17:22-10.200.16.10:56764.service - OpenSSH per-connection server daemon (10.200.16.10:56764). Apr 30 00:41:02.245524 sshd[8142]: Accepted publickey for core from 10.200.16.10 port 56764 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw Apr 30 00:41:02.246829 sshd[8142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:41:02.250651 systemd-logind[1748]: New session 21 of user core. 
Apr 30 00:41:02.262636 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 30 00:41:02.603865 sshd[8142]: pam_unix(sshd:session): session closed for user core
Apr 30 00:41:02.607240 systemd[1]: sshd@18-10.200.20.17:22-10.200.16.10:56764.service: Deactivated successfully.
Apr 30 00:41:02.610136 systemd-logind[1748]: Session 21 logged out. Waiting for processes to exit.
Apr 30 00:41:02.610632 systemd[1]: session-21.scope: Deactivated successfully.
Apr 30 00:41:02.611891 systemd-logind[1748]: Removed session 21.
Apr 30 00:41:07.683609 systemd[1]: Started sshd@19-10.200.20.17:22-10.200.16.10:56776.service - OpenSSH per-connection server daemon (10.200.16.10:56776).
Apr 30 00:41:08.122527 sshd[8178]: Accepted publickey for core from 10.200.16.10 port 56776 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:41:08.123874 sshd[8178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:41:08.127737 systemd-logind[1748]: New session 22 of user core.
Apr 30 00:41:08.135655 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 30 00:41:08.498576 sshd[8178]: pam_unix(sshd:session): session closed for user core
Apr 30 00:41:08.502355 systemd[1]: sshd@19-10.200.20.17:22-10.200.16.10:56776.service: Deactivated successfully.
Apr 30 00:41:08.505647 systemd[1]: session-22.scope: Deactivated successfully.
Apr 30 00:41:08.506718 systemd-logind[1748]: Session 22 logged out. Waiting for processes to exit.
Apr 30 00:41:08.507671 systemd-logind[1748]: Removed session 22.
Apr 30 00:41:13.578592 systemd[1]: Started sshd@20-10.200.20.17:22-10.200.16.10:35084.service - OpenSSH per-connection server daemon (10.200.16.10:35084).
Apr 30 00:41:14.017099 sshd[8193]: Accepted publickey for core from 10.200.16.10 port 35084 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:41:14.018473 sshd[8193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:41:14.022688 systemd-logind[1748]: New session 23 of user core.
Apr 30 00:41:14.027584 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 30 00:41:14.395573 sshd[8193]: pam_unix(sshd:session): session closed for user core
Apr 30 00:41:14.399520 systemd[1]: sshd@20-10.200.20.17:22-10.200.16.10:35084.service: Deactivated successfully.
Apr 30 00:41:14.402514 systemd[1]: session-23.scope: Deactivated successfully.
Apr 30 00:41:14.403684 systemd-logind[1748]: Session 23 logged out. Waiting for processes to exit.
Apr 30 00:41:14.404997 systemd-logind[1748]: Removed session 23.
Apr 30 00:41:19.470609 systemd[1]: Started sshd@21-10.200.20.17:22-10.200.16.10:48432.service - OpenSSH per-connection server daemon (10.200.16.10:48432).
Apr 30 00:41:19.877392 sshd[8209]: Accepted publickey for core from 10.200.16.10 port 48432 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:41:19.878752 sshd[8209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:41:19.882695 systemd-logind[1748]: New session 24 of user core.
Apr 30 00:41:19.887703 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 30 00:41:20.233269 sshd[8209]: pam_unix(sshd:session): session closed for user core
Apr 30 00:41:20.236983 systemd[1]: sshd@21-10.200.20.17:22-10.200.16.10:48432.service: Deactivated successfully.
Apr 30 00:41:20.240042 systemd-logind[1748]: Session 24 logged out. Waiting for processes to exit.
Apr 30 00:41:20.241167 systemd[1]: session-24.scope: Deactivated successfully.
Apr 30 00:41:20.243984 systemd-logind[1748]: Removed session 24.
Apr 30 00:41:25.309606 systemd[1]: Started sshd@22-10.200.20.17:22-10.200.16.10:48438.service - OpenSSH per-connection server daemon (10.200.16.10:48438).
Apr 30 00:41:25.720709 sshd[8223]: Accepted publickey for core from 10.200.16.10 port 48438 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:41:25.722024 sshd[8223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:41:25.727354 systemd-logind[1748]: New session 25 of user core.
Apr 30 00:41:25.731669 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 30 00:41:26.079305 sshd[8223]: pam_unix(sshd:session): session closed for user core
Apr 30 00:41:26.083780 systemd[1]: sshd@22-10.200.20.17:22-10.200.16.10:48438.service: Deactivated successfully.
Apr 30 00:41:26.087493 systemd[1]: session-25.scope: Deactivated successfully.
Apr 30 00:41:26.088152 systemd-logind[1748]: Session 25 logged out. Waiting for processes to exit.
Apr 30 00:41:26.089284 systemd-logind[1748]: Removed session 25.
Apr 30 00:41:30.631215 systemd[1]: run-containerd-runc-k8s.io-7495e452d462f8ec764abf9bc8cbe2a00a918f7c45ae329510d28872faa235a9-runc.dvcA6T.mount: Deactivated successfully.
Apr 30 00:41:31.156658 systemd[1]: Started sshd@23-10.200.20.17:22-10.200.16.10:52958.service - OpenSSH per-connection server daemon (10.200.16.10:52958).
Apr 30 00:41:31.563276 sshd[8298]: Accepted publickey for core from 10.200.16.10 port 52958 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:41:31.565803 sshd[8298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:41:31.579669 systemd-logind[1748]: New session 26 of user core.
Apr 30 00:41:31.585035 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 30 00:41:31.920252 sshd[8298]: pam_unix(sshd:session): session closed for user core
Apr 30 00:41:31.922973 systemd[1]: sshd@23-10.200.20.17:22-10.200.16.10:52958.service: Deactivated successfully.
Apr 30 00:41:31.927032 systemd-logind[1748]: Session 26 logged out. Waiting for processes to exit.
Apr 30 00:41:31.927724 systemd[1]: session-26.scope: Deactivated successfully.
Apr 30 00:41:31.929205 systemd-logind[1748]: Removed session 26.
Apr 30 00:41:36.994657 systemd[1]: Started sshd@24-10.200.20.17:22-10.200.16.10:52962.service - OpenSSH per-connection server daemon (10.200.16.10:52962).
Apr 30 00:41:37.408170 sshd[8313]: Accepted publickey for core from 10.200.16.10 port 52962 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:41:37.409932 sshd[8313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:41:37.414195 systemd-logind[1748]: New session 27 of user core.
Apr 30 00:41:37.418640 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 30 00:41:37.767246 sshd[8313]: pam_unix(sshd:session): session closed for user core
Apr 30 00:41:37.770726 systemd-logind[1748]: Session 27 logged out. Waiting for processes to exit.
Apr 30 00:41:37.771551 systemd[1]: sshd@24-10.200.20.17:22-10.200.16.10:52962.service: Deactivated successfully.
Apr 30 00:41:37.774194 systemd[1]: session-27.scope: Deactivated successfully.
Apr 30 00:41:37.775812 systemd-logind[1748]: Removed session 27.