Mar 25 01:16:08.349686 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 25 01:16:08.349710 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Mon Mar 24 23:39:14 -00 2025
Mar 25 01:16:08.349718 kernel: KASLR enabled
Mar 25 01:16:08.349723 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 25 01:16:08.349730 kernel: printk: bootconsole [pl11] enabled
Mar 25 01:16:08.349735 kernel: efi: EFI v2.7 by EDK II
Mar 25 01:16:08.349742 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e698 RNG=0x3fd5f998 MEMRESERVE=0x3e471598
Mar 25 01:16:08.349748 kernel: random: crng init done
Mar 25 01:16:08.349754 kernel: secureboot: Secure boot disabled
Mar 25 01:16:08.349759 kernel: ACPI: Early table checksum verification disabled
Mar 25 01:16:08.349765 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Mar 25 01:16:08.349771 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 25 01:16:08.349776 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 25 01:16:08.349784 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 25 01:16:08.349791 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 25 01:16:08.349797 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 25 01:16:08.349803 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 25 01:16:08.349810 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 25 01:16:08.349816 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 25 01:16:08.349822 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 25 01:16:08.349828 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 25 01:16:08.349834 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 25 01:16:08.349839 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 25 01:16:08.349845 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Mar 25 01:16:08.349851 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Mar 25 01:16:08.349858 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Mar 25 01:16:08.349863 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Mar 25 01:16:08.349869 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Mar 25 01:16:08.349877 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Mar 25 01:16:08.349883 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Mar 25 01:16:08.349888 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Mar 25 01:16:08.349894 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Mar 25 01:16:08.349900 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Mar 25 01:16:08.349906 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Mar 25 01:16:08.349911 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Mar 25 01:16:08.349917 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Mar 25 01:16:08.349923 kernel: Zone ranges:
Mar 25 01:16:08.349930 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Mar 25 01:16:08.349935 kernel: DMA32 empty
Mar 25 01:16:08.349941 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Mar 25 01:16:08.349952 kernel: Movable zone start for each node
Mar 25 01:16:08.349958 kernel: Early memory node ranges
Mar 25 01:16:08.349964 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 25 01:16:08.349971 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Mar 25 01:16:08.349977 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Mar 25 01:16:08.349984 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Mar 25 01:16:08.349991 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Mar 25 01:16:08.349997 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Mar 25 01:16:08.350003 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Mar 25 01:16:08.350009 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Mar 25 01:16:08.350015 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 25 01:16:08.350022 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 25 01:16:08.350028 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 25 01:16:08.350034 kernel: psci: probing for conduit method from ACPI.
Mar 25 01:16:08.350041 kernel: psci: PSCIv1.1 detected in firmware.
Mar 25 01:16:08.350047 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 25 01:16:08.350053 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 25 01:16:08.350060 kernel: psci: SMC Calling Convention v1.4
Mar 25 01:16:08.350067 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 25 01:16:08.350073 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 25 01:16:08.350079 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Mar 25 01:16:08.350085 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Mar 25 01:16:08.350092 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 25 01:16:08.350098 kernel: Detected PIPT I-cache on CPU0
Mar 25 01:16:08.350104 kernel: CPU features: detected: GIC system register CPU interface
Mar 25 01:16:08.350111 kernel: CPU features: detected: Hardware dirty bit management
Mar 25 01:16:08.350117 kernel: CPU features: detected: Spectre-BHB
Mar 25 01:16:08.350124 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 25 01:16:08.350132 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 25 01:16:08.350138 kernel: CPU features: detected: ARM erratum 1418040
Mar 25 01:16:08.350144 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Mar 25 01:16:08.350151 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 25 01:16:08.350157 kernel: alternatives: applying boot alternatives
Mar 25 01:16:08.350177 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=b84e5f613acd6cd0a8a878f32f5653a14f2e6fb2820997fecd5b2bd33a4ba3ab
Mar 25 01:16:08.350184 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 25 01:16:08.350191 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 25 01:16:08.350198 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 25 01:16:08.350205 kernel: Fallback order for Node 0: 0
Mar 25 01:16:08.350211 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Mar 25 01:16:08.350219 kernel: Policy zone: Normal
Mar 25 01:16:08.350225 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 25 01:16:08.350232 kernel: software IO TLB: area num 2.
Mar 25 01:16:08.350238 kernel: software IO TLB: mapped [mem 0x0000000036530000-0x000000003a530000] (64MB)
Mar 25 01:16:08.350245 kernel: Memory: 3983528K/4194160K available (10304K kernel code, 2186K rwdata, 8096K rodata, 38464K init, 897K bss, 210632K reserved, 0K cma-reserved)
Mar 25 01:16:08.350251 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 25 01:16:08.350258 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 25 01:16:08.350265 kernel: rcu: RCU event tracing is enabled.
Mar 25 01:16:08.350271 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 25 01:16:08.350278 kernel: Trampoline variant of Tasks RCU enabled.
Mar 25 01:16:08.350284 kernel: Tracing variant of Tasks RCU enabled.
Mar 25 01:16:08.350292 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 25 01:16:08.350298 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 25 01:16:08.350305 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 25 01:16:08.350311 kernel: GICv3: 960 SPIs implemented
Mar 25 01:16:08.350317 kernel: GICv3: 0 Extended SPIs implemented
Mar 25 01:16:08.350324 kernel: Root IRQ handler: gic_handle_irq
Mar 25 01:16:08.350330 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 25 01:16:08.350336 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 25 01:16:08.350342 kernel: ITS: No ITS available, not enabling LPIs
Mar 25 01:16:08.350349 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 25 01:16:08.350355 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 25 01:16:08.350361 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 25 01:16:08.350369 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 25 01:16:08.350376 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 25 01:16:08.350382 kernel: Console: colour dummy device 80x25
Mar 25 01:16:08.350389 kernel: printk: console [tty1] enabled
Mar 25 01:16:08.350396 kernel: ACPI: Core revision 20230628
Mar 25 01:16:08.350402 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 25 01:16:08.350409 kernel: pid_max: default: 32768 minimum: 301
Mar 25 01:16:08.350415 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 25 01:16:08.350422 kernel: landlock: Up and running.
Mar 25 01:16:08.350430 kernel: SELinux: Initializing.
Mar 25 01:16:08.350436 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 25 01:16:08.350443 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 25 01:16:08.350449 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 25 01:16:08.350456 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 25 01:16:08.350463 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Mar 25 01:16:08.350469 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Mar 25 01:16:08.350482 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 25 01:16:08.350489 kernel: rcu: Hierarchical SRCU implementation.
Mar 25 01:16:08.350496 kernel: rcu: Max phase no-delay instances is 400.
Mar 25 01:16:08.350503 kernel: Remapping and enabling EFI services.
Mar 25 01:16:08.350509 kernel: smp: Bringing up secondary CPUs ...
Mar 25 01:16:08.350518 kernel: Detected PIPT I-cache on CPU1
Mar 25 01:16:08.350524 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 25 01:16:08.350552 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 25 01:16:08.350560 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 25 01:16:08.350567 kernel: smp: Brought up 1 node, 2 CPUs
Mar 25 01:16:08.350577 kernel: SMP: Total of 2 processors activated.
Mar 25 01:16:08.350583 kernel: CPU features: detected: 32-bit EL0 Support
Mar 25 01:16:08.350590 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 25 01:16:08.350597 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 25 01:16:08.350604 kernel: CPU features: detected: CRC32 instructions
Mar 25 01:16:08.350611 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 25 01:16:08.350617 kernel: CPU features: detected: LSE atomic instructions
Mar 25 01:16:08.350624 kernel: CPU features: detected: Privileged Access Never
Mar 25 01:16:08.350631 kernel: CPU: All CPU(s) started at EL1
Mar 25 01:16:08.350639 kernel: alternatives: applying system-wide alternatives
Mar 25 01:16:08.350646 kernel: devtmpfs: initialized
Mar 25 01:16:08.350653 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 25 01:16:08.350660 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 25 01:16:08.350667 kernel: pinctrl core: initialized pinctrl subsystem
Mar 25 01:16:08.350677 kernel: SMBIOS 3.1.0 present.
Mar 25 01:16:08.350684 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Mar 25 01:16:08.350691 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 25 01:16:08.350702 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 25 01:16:08.350711 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 25 01:16:08.350718 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 25 01:16:08.350724 kernel: audit: initializing netlink subsys (disabled)
Mar 25 01:16:08.350731 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Mar 25 01:16:08.350738 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 25 01:16:08.350745 kernel: cpuidle: using governor menu
Mar 25 01:16:08.350751 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 25 01:16:08.350758 kernel: ASID allocator initialised with 32768 entries
Mar 25 01:16:08.350765 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 25 01:16:08.350773 kernel: Serial: AMBA PL011 UART driver
Mar 25 01:16:08.350780 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 25 01:16:08.350787 kernel: Modules: 0 pages in range for non-PLT usage
Mar 25 01:16:08.350794 kernel: Modules: 509248 pages in range for PLT usage
Mar 25 01:16:08.350800 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 25 01:16:08.350807 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 25 01:16:08.350814 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 25 01:16:08.350821 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 25 01:16:08.350828 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 25 01:16:08.350836 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 25 01:16:08.350843 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 25 01:16:08.350850 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 25 01:16:08.350856 kernel: ACPI: Added _OSI(Module Device)
Mar 25 01:16:08.350863 kernel: ACPI: Added _OSI(Processor Device)
Mar 25 01:16:08.350870 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 25 01:16:08.350876 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 25 01:16:08.350883 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 25 01:16:08.350890 kernel: ACPI: Interpreter enabled
Mar 25 01:16:08.350898 kernel: ACPI: Using GIC for interrupt routing
Mar 25 01:16:08.350905 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 25 01:16:08.350912 kernel: printk: console [ttyAMA0] enabled
Mar 25 01:16:08.350918 kernel: printk: bootconsole [pl11] disabled
Mar 25 01:16:08.350925 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 25 01:16:08.350932 kernel: iommu: Default domain type: Translated
Mar 25 01:16:08.350939 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 25 01:16:08.350946 kernel: efivars: Registered efivars operations
Mar 25 01:16:08.350953 kernel: vgaarb: loaded
Mar 25 01:16:08.350961 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 25 01:16:08.350968 kernel: VFS: Disk quotas dquot_6.6.0
Mar 25 01:16:08.350974 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 25 01:16:08.350981 kernel: pnp: PnP ACPI init
Mar 25 01:16:08.350988 kernel: pnp: PnP ACPI: found 0 devices
Mar 25 01:16:08.350995 kernel: NET: Registered PF_INET protocol family
Mar 25 01:16:08.351001 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 25 01:16:08.351008 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 25 01:16:08.351015 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 25 01:16:08.351023 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 25 01:16:08.351030 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 25 01:16:08.351037 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 25 01:16:08.351044 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 25 01:16:08.351051 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 25 01:16:08.351058 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 25 01:16:08.351065 kernel: PCI: CLS 0 bytes, default 64
Mar 25 01:16:08.351071 kernel: kvm [1]: HYP mode not available
Mar 25 01:16:08.351078 kernel: Initialise system trusted keyrings
Mar 25 01:16:08.351086 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 25 01:16:08.351093 kernel: Key type asymmetric registered
Mar 25 01:16:08.351100 kernel: Asymmetric key parser 'x509' registered
Mar 25 01:16:08.351106 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 25 01:16:08.351113 kernel: io scheduler mq-deadline registered
Mar 25 01:16:08.351120 kernel: io scheduler kyber registered
Mar 25 01:16:08.351126 kernel: io scheduler bfq registered
Mar 25 01:16:08.351133 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 25 01:16:08.351140 kernel: thunder_xcv, ver 1.0
Mar 25 01:16:08.351148 kernel: thunder_bgx, ver 1.0
Mar 25 01:16:08.351155 kernel: nicpf, ver 1.0
Mar 25 01:16:08.351161 kernel: nicvf, ver 1.0
Mar 25 01:16:08.351286 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 25 01:16:08.351358 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-25T01:16:07 UTC (1742865367)
Mar 25 01:16:08.351367 kernel: efifb: probing for efifb
Mar 25 01:16:08.351374 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 25 01:16:08.351381 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 25 01:16:08.351390 kernel: efifb: scrolling: redraw
Mar 25 01:16:08.351396 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 25 01:16:08.351403 kernel: Console: switching to colour frame buffer device 128x48
Mar 25 01:16:08.351410 kernel: fb0: EFI VGA frame buffer device
Mar 25 01:16:08.351417 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 25 01:16:08.351423 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 25 01:16:08.351430 kernel: No ACPI PMU IRQ for CPU0
Mar 25 01:16:08.351437 kernel: No ACPI PMU IRQ for CPU1
Mar 25 01:16:08.351444 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Mar 25 01:16:08.351452 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 25 01:16:08.351459 kernel: watchdog: Hard watchdog permanently disabled
Mar 25 01:16:08.351466 kernel: NET: Registered PF_INET6 protocol family
Mar 25 01:16:08.351472 kernel: Segment Routing with IPv6
Mar 25 01:16:08.351479 kernel: In-situ OAM (IOAM) with IPv6
Mar 25 01:16:08.351486 kernel: NET: Registered PF_PACKET protocol family
Mar 25 01:16:08.351493 kernel: Key type dns_resolver registered
Mar 25 01:16:08.351499 kernel: registered taskstats version 1
Mar 25 01:16:08.351506 kernel: Loading compiled-in X.509 certificates
Mar 25 01:16:08.351514 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: ed4ababe871f0afac8b4236504477de11a6baf07'
Mar 25 01:16:08.351521 kernel: Key type .fscrypt registered
Mar 25 01:16:08.351538 kernel: Key type fscrypt-provisioning registered
Mar 25 01:16:08.351546 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 25 01:16:08.351553 kernel: ima: Allocated hash algorithm: sha1
Mar 25 01:16:08.351560 kernel: ima: No architecture policies found
Mar 25 01:16:08.351567 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 25 01:16:08.351574 kernel: clk: Disabling unused clocks
Mar 25 01:16:08.351581 kernel: Freeing unused kernel memory: 38464K
Mar 25 01:16:08.351590 kernel: Run /init as init process
Mar 25 01:16:08.351597 kernel: with arguments:
Mar 25 01:16:08.351603 kernel: /init
Mar 25 01:16:08.351610 kernel: with environment:
Mar 25 01:16:08.351616 kernel: HOME=/
Mar 25 01:16:08.351623 kernel: TERM=linux
Mar 25 01:16:08.351630 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 25 01:16:08.351637 systemd[1]: Successfully made /usr/ read-only.
Mar 25 01:16:08.351649 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 25 01:16:08.351656 systemd[1]: Detected virtualization microsoft.
Mar 25 01:16:08.351664 systemd[1]: Detected architecture arm64.
Mar 25 01:16:08.351671 systemd[1]: Running in initrd.
Mar 25 01:16:08.351678 systemd[1]: No hostname configured, using default hostname.
Mar 25 01:16:08.351686 systemd[1]: Hostname set to <localhost>.
Mar 25 01:16:08.351693 systemd[1]: Initializing machine ID from random generator.
Mar 25 01:16:08.351700 systemd[1]: Queued start job for default target initrd.target.
Mar 25 01:16:08.351710 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 25 01:16:08.351717 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 25 01:16:08.351725 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 25 01:16:08.351733 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 25 01:16:08.351740 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 25 01:16:08.351748 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 25 01:16:08.351757 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 25 01:16:08.351766 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 25 01:16:08.351774 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 25 01:16:08.351781 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 25 01:16:08.351788 systemd[1]: Reached target paths.target - Path Units.
Mar 25 01:16:08.351796 systemd[1]: Reached target slices.target - Slice Units.
Mar 25 01:16:08.351803 systemd[1]: Reached target swap.target - Swaps.
Mar 25 01:16:08.351811 systemd[1]: Reached target timers.target - Timer Units.
Mar 25 01:16:08.351818 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 25 01:16:08.351827 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 25 01:16:08.351835 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 25 01:16:08.351842 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 25 01:16:08.351850 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 25 01:16:08.351858 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 25 01:16:08.351865 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 25 01:16:08.351873 systemd[1]: Reached target sockets.target - Socket Units.
Mar 25 01:16:08.351880 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 25 01:16:08.351888 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 25 01:16:08.351897 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 25 01:16:08.351904 systemd[1]: Starting systemd-fsck-usr.service...
Mar 25 01:16:08.351911 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 25 01:16:08.351919 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 25 01:16:08.351941 systemd-journald[218]: Collecting audit messages is disabled.
Mar 25 01:16:08.351960 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 01:16:08.351969 systemd-journald[218]: Journal started
Mar 25 01:16:08.351987 systemd-journald[218]: Runtime Journal (/run/log/journal/6a6b39b93b33425d9ce44104f956a4b8) is 8M, max 78.5M, 70.5M free.
Mar 25 01:16:08.368850 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 25 01:16:08.385558 systemd-modules-load[220]: Inserted module 'overlay'
Mar 25 01:16:08.401182 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 25 01:16:08.408274 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 25 01:16:08.432665 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 25 01:16:08.432690 kernel: Bridge firewalling registered
Mar 25 01:16:08.431825 systemd-modules-load[220]: Inserted module 'br_netfilter'
Mar 25 01:16:08.438485 systemd[1]: Finished systemd-fsck-usr.service.
Mar 25 01:16:08.448789 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 25 01:16:08.460259 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:16:08.478710 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 25 01:16:08.499047 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 25 01:16:08.512728 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 25 01:16:08.542752 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 25 01:16:08.562572 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 01:16:08.575910 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 25 01:16:08.591610 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 25 01:16:08.618463 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 25 01:16:08.628980 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 25 01:16:08.654659 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 25 01:16:08.673720 dracut-cmdline[251]: dracut-dracut-053
Mar 25 01:16:08.673893 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 25 01:16:08.698014 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=b84e5f613acd6cd0a8a878f32f5653a14f2e6fb2820997fecd5b2bd33a4ba3ab
Mar 25 01:16:08.744069 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 25 01:16:08.759784 systemd-resolved[252]: Positive Trust Anchors:
Mar 25 01:16:08.759795 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 25 01:16:08.759826 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 25 01:16:08.762941 systemd-resolved[252]: Defaulting to hostname 'linux'.
Mar 25 01:16:08.768349 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 25 01:16:08.777617 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 25 01:16:08.921562 kernel: SCSI subsystem initialized
Mar 25 01:16:08.930549 kernel: Loading iSCSI transport class v2.0-870.
Mar 25 01:16:08.940549 kernel: iscsi: registered transport (tcp)
Mar 25 01:16:08.959589 kernel: iscsi: registered transport (qla4xxx)
Mar 25 01:16:08.959634 kernel: QLogic iSCSI HBA Driver
Mar 25 01:16:09.003943 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 25 01:16:09.015734 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 25 01:16:09.065565 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 25 01:16:09.065665 kernel: device-mapper: uevent: version 1.0.3
Mar 25 01:16:09.073076 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 25 01:16:09.123563 kernel: raid6: neonx8 gen() 15770 MB/s
Mar 25 01:16:09.144542 kernel: raid6: neonx4 gen() 15807 MB/s
Mar 25 01:16:09.166541 kernel: raid6: neonx2 gen() 13227 MB/s
Mar 25 01:16:09.186540 kernel: raid6: neonx1 gen() 10425 MB/s
Mar 25 01:16:09.206540 kernel: raid6: int64x8 gen() 6792 MB/s
Mar 25 01:16:09.228541 kernel: raid6: int64x4 gen() 7349 MB/s
Mar 25 01:16:09.248541 kernel: raid6: int64x2 gen() 6115 MB/s
Mar 25 01:16:09.272746 kernel: raid6: int64x1 gen() 5053 MB/s
Mar 25 01:16:09.272756 kernel: raid6: using algorithm neonx4 gen() 15807 MB/s
Mar 25 01:16:09.298730 kernel: raid6: .... xor() 12591 MB/s, rmw enabled
Mar 25 01:16:09.298743 kernel: raid6: using neon recovery algorithm
Mar 25 01:16:09.307545 kernel: xor: measuring software checksum speed
Mar 25 01:16:09.314716 kernel: 8regs : 20138 MB/sec
Mar 25 01:16:09.314727 kernel: 32regs : 21664 MB/sec
Mar 25 01:16:09.318729 kernel: arm64_neon : 28013 MB/sec
Mar 25 01:16:09.323821 kernel: xor: using function: arm64_neon (28013 MB/sec)
Mar 25 01:16:09.374554 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 25 01:16:09.389566 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 25 01:16:09.404266 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 25 01:16:09.438877 systemd-udevd[437]: Using default interface naming scheme 'v255'.
Mar 25 01:16:09.444825 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 25 01:16:09.456688 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 25 01:16:09.487902 dracut-pre-trigger[444]: rd.md=0: removing MD RAID activation
Mar 25 01:16:09.534689 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 25 01:16:09.543670 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 25 01:16:09.605575 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 25 01:16:09.625670 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 25 01:16:09.660562 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 25 01:16:09.674558 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 25 01:16:09.690966 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 25 01:16:09.711481 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 25 01:16:09.720948 kernel: hv_vmbus: Vmbus version:5.3
Mar 25 01:16:09.729873 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 25 01:16:09.761075 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 25 01:16:09.761125 kernel: hv_vmbus: registering driver hid_hyperv
Mar 25 01:16:09.761136 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 25 01:16:09.754778 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 25 01:16:09.828908 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Mar 25 01:16:09.828938 kernel: hv_vmbus: registering driver hv_storvsc
Mar 25 01:16:09.828948 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 25 01:16:09.829080 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 25 01:16:09.829090 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Mar 25 01:16:09.829100 kernel: hv_vmbus: registering driver hv_netvsc
Mar 25 01:16:09.829108 kernel: scsi host0: storvsc_host_t
Mar 25 01:16:09.829201 kernel: scsi host1: storvsc_host_t
Mar 25 01:16:09.754941 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 01:16:09.851595 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 25 01:16:09.798748 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 25 01:16:09.847149 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 25 01:16:09.903970 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Mar 25 01:16:09.847374 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:16:09.930702 kernel: PTP clock support registered
Mar 25 01:16:09.867393 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 01:16:09.957000 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Mar 25 01:16:09.966670 kernel: hv_netvsc 000d3a06-cd00-000d-3a06-cd00000d3a06 eth0: VF slot 1 added
Mar 25 01:16:09.966799 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 25 01:16:09.966809 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Mar 25 01:16:09.885860 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 01:16:09.988680 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Mar 25 01:16:10.039547 kernel: hv_utils: Registering HyperV Utility Driver
Mar 25 01:16:10.039566 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Mar 25 01:16:10.039695 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 25 01:16:10.039789 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Mar 25 01:16:10.039886 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Mar 25 01:16:10.039978 kernel: hv_vmbus: registering driver hv_utils
Mar 25 01:16:10.039988 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 25 01:16:10.039996 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 25 01:16:10.040078 kernel: hv_utils: Heartbeat IC version 3.0
Mar 25 01:16:10.040088 kernel: hv_utils: Shutdown IC version 3.2
Mar 25 01:16:09.917321 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 25 01:16:09.918366 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 25 01:16:10.300926 kernel: hv_utils: TimeSync IC version 4.0
Mar 25 01:16:10.300957 kernel: hv_vmbus: registering driver hv_pci
Mar 25 01:16:10.300967 kernel: hv_pci c907a81c-6bf5-4d57-b33f-803df714e660: PCI VMBus probing: Using version 0x10004
Mar 25 01:16:10.396713 kernel: hv_pci c907a81c-6bf5-4d57-b33f-803df714e660: PCI host bridge to bus 6bf5:00
Mar 25 01:16:10.397121 kernel: pci_bus 6bf5:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Mar 25 01:16:10.397245 kernel: pci_bus 6bf5:00: No busn resource found for root bus, will use [bus 00-ff]
Mar 25 01:16:10.397341 kernel: pci 6bf5:00:02.0: [15b3:1018] type 00 class 0x020000
Mar 25 01:16:10.397453 kernel: pci 6bf5:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 25 01:16:10.397552 kernel: pci 6bf5:00:02.0: enabling Extended Tags
Mar 25 01:16:10.397692 kernel: pci 6bf5:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 6bf5:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Mar 25 01:16:10.397791 kernel: pci_bus 6bf5:00: busn_res: [bus 00-ff] end is updated to 00
Mar 25 01:16:10.397873 kernel: pci 6bf5:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 25 01:16:09.990613 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:16:10.003682 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 25 01:16:10.287724 systemd-resolved[252]: Clock change detected. Flushing caches.
Mar 25 01:16:10.337077 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 01:16:10.442126 kernel: mlx5_core 6bf5:00:02.0: enabling device (0000 -> 0002)
Mar 25 01:16:10.663978 kernel: mlx5_core 6bf5:00:02.0: firmware version: 16.30.1284
Mar 25 01:16:10.664128 kernel: hv_netvsc 000d3a06-cd00-000d-3a06-cd00000d3a06 eth0: VF registering: eth1
Mar 25 01:16:10.664273 kernel: mlx5_core 6bf5:00:02.0 eth1: joined to eth0
Mar 25 01:16:10.664373 kernel: mlx5_core 6bf5:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Mar 25 01:16:10.674654 kernel: mlx5_core 6bf5:00:02.0 enP27637s1: renamed from eth1
Mar 25 01:16:11.382533 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Mar 25 01:16:11.422154 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by (udev-worker) (492)
Mar 25 01:16:11.439831 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 25 01:16:11.485083 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Mar 25 01:16:11.519632 kernel: BTRFS: device fsid bf348154-9cb1-474d-801c-0e035a5758cf devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (494)
Mar 25 01:16:11.538022 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Mar 25 01:16:11.547176 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Mar 25 01:16:11.565791 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 25 01:16:11.609625 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 25 01:16:11.620645 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 25 01:16:12.628789 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 25 01:16:12.628852 disk-uuid[606]: The operation has completed successfully.
Mar 25 01:16:12.700736 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 25 01:16:12.700843 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 25 01:16:12.749364 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 25 01:16:12.770998 sh[692]: Success
Mar 25 01:16:12.800677 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 25 01:16:13.011253 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 25 01:16:13.029742 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 25 01:16:13.040550 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 25 01:16:13.075584 kernel: BTRFS info (device dm-0): first mount of filesystem bf348154-9cb1-474d-801c-0e035a5758cf
Mar 25 01:16:13.075672 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 25 01:16:13.082822 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 25 01:16:13.088037 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 25 01:16:13.092433 kernel: BTRFS info (device dm-0): using free space tree
Mar 25 01:16:13.384913 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 25 01:16:13.390635 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 25 01:16:13.393734 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 25 01:16:13.403734 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 25 01:16:13.463776 kernel: BTRFS info (device sda6): first mount of filesystem 09629b08-d05c-4ce3-8bf7-615041c4b2c9
Mar 25 01:16:13.463836 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 25 01:16:13.463857 kernel: BTRFS info (device sda6): using free space tree
Mar 25 01:16:13.500709 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 25 01:16:13.512822 kernel: BTRFS info (device sda6): last unmount of filesystem 09629b08-d05c-4ce3-8bf7-615041c4b2c9
Mar 25 01:16:13.517089 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 25 01:16:13.527980 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 25 01:16:13.563153 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 25 01:16:13.577850 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 25 01:16:13.620011 systemd-networkd[873]: lo: Link UP
Mar 25 01:16:13.620020 systemd-networkd[873]: lo: Gained carrier
Mar 25 01:16:13.622725 systemd-networkd[873]: Enumeration completed
Mar 25 01:16:13.623038 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 25 01:16:13.630560 systemd[1]: Reached target network.target - Network.
Mar 25 01:16:13.634441 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 25 01:16:13.634444 systemd-networkd[873]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 25 01:16:13.720631 kernel: mlx5_core 6bf5:00:02.0 enP27637s1: Link up
Mar 25 01:16:13.762666 kernel: hv_netvsc 000d3a06-cd00-000d-3a06-cd00000d3a06 eth0: Data path switched to VF: enP27637s1
Mar 25 01:16:13.763067 systemd-networkd[873]: enP27637s1: Link UP
Mar 25 01:16:13.766807 systemd-networkd[873]: eth0: Link UP
Mar 25 01:16:13.766965 systemd-networkd[873]: eth0: Gained carrier
Mar 25 01:16:13.766975 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 25 01:16:13.774828 systemd-networkd[873]: enP27637s1: Gained carrier
Mar 25 01:16:13.792675 systemd-networkd[873]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 25 01:16:14.400077 ignition[862]: Ignition 2.20.0
Mar 25 01:16:14.400088 ignition[862]: Stage: fetch-offline
Mar 25 01:16:14.405429 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 25 01:16:14.400122 ignition[862]: no configs at "/usr/lib/ignition/base.d"
Mar 25 01:16:14.417749 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 25 01:16:14.400130 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 25 01:16:14.400227 ignition[862]: parsed url from cmdline: ""
Mar 25 01:16:14.400230 ignition[862]: no config URL provided
Mar 25 01:16:14.400234 ignition[862]: reading system config file "/usr/lib/ignition/user.ign"
Mar 25 01:16:14.400241 ignition[862]: no config at "/usr/lib/ignition/user.ign"
Mar 25 01:16:14.400245 ignition[862]: failed to fetch config: resource requires networking
Mar 25 01:16:14.400563 ignition[862]: Ignition finished successfully
Mar 25 01:16:14.470145 ignition[882]: Ignition 2.20.0
Mar 25 01:16:14.470152 ignition[882]: Stage: fetch
Mar 25 01:16:14.470419 ignition[882]: no configs at "/usr/lib/ignition/base.d"
Mar 25 01:16:14.470431 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 25 01:16:14.470553 ignition[882]: parsed url from cmdline: ""
Mar 25 01:16:14.470557 ignition[882]: no config URL provided
Mar 25 01:16:14.470562 ignition[882]: reading system config file "/usr/lib/ignition/user.ign"
Mar 25 01:16:14.470573 ignition[882]: no config at "/usr/lib/ignition/user.ign"
Mar 25 01:16:14.470612 ignition[882]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Mar 25 01:16:14.593864 ignition[882]: GET result: OK
Mar 25 01:16:14.593955 ignition[882]: config has been read from IMDS userdata
Mar 25 01:16:14.594010 ignition[882]: parsing config with SHA512: 1cabcc27b855914efd16a726245dd7d7918e3ae64f1af6c95645f4830986d317b1cf601d2baa42e79caa7f2ca00051010ed1e00780839e90307c8e0202fa4591
Mar 25 01:16:14.598676 unknown[882]: fetched base config from "system"
Mar 25 01:16:14.599041 ignition[882]: fetch: fetch complete
Mar 25 01:16:14.598684 unknown[882]: fetched base config from "system"
Mar 25 01:16:14.599047 ignition[882]: fetch: fetch passed
Mar 25 01:16:14.598689 unknown[882]: fetched user config from "azure"
Mar 25 01:16:14.599091 ignition[882]: Ignition finished successfully
Mar 25 01:16:14.604462 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 25 01:16:14.619864 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 25 01:16:14.665857 ignition[889]: Ignition 2.20.0
Mar 25 01:16:14.667496 ignition[889]: Stage: kargs
Mar 25 01:16:14.670464 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 25 01:16:14.667697 ignition[889]: no configs at "/usr/lib/ignition/base.d"
Mar 25 01:16:14.683763 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 25 01:16:14.667708 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 25 01:16:14.668783 ignition[889]: kargs: kargs passed
Mar 25 01:16:14.668833 ignition[889]: Ignition finished successfully
Mar 25 01:16:14.720370 ignition[896]: Ignition 2.20.0
Mar 25 01:16:14.720381 ignition[896]: Stage: disks
Mar 25 01:16:14.725196 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 25 01:16:14.720575 ignition[896]: no configs at "/usr/lib/ignition/base.d"
Mar 25 01:16:14.734741 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 25 01:16:14.720587 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 25 01:16:14.747636 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 25 01:16:14.721646 ignition[896]: disks: disks passed
Mar 25 01:16:14.761713 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 25 01:16:14.721704 ignition[896]: Ignition finished successfully
Mar 25 01:16:14.774342 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 25 01:16:14.785852 systemd[1]: Reached target basic.target - Basic System.
Mar 25 01:16:14.801768 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 25 01:16:14.889156 systemd-fsck[905]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Mar 25 01:16:14.899692 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 25 01:16:14.910748 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 25 01:16:14.973351 kernel: EXT4-fs (sda9): mounted filesystem a7a89271-ee7d-4bda-a834-705261d6cda9 r/w with ordered data mode. Quota mode: none.
Mar 25 01:16:14.973792 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 25 01:16:14.978907 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 25 01:16:15.019108 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 25 01:16:15.038462 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 25 01:16:15.049834 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 25 01:16:15.058730 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 25 01:16:15.058764 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 25 01:16:15.072289 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 25 01:16:15.113730 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (916)
Mar 25 01:16:15.092747 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 25 01:16:15.132968 kernel: BTRFS info (device sda6): first mount of filesystem 09629b08-d05c-4ce3-8bf7-615041c4b2c9
Mar 25 01:16:15.133025 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 25 01:16:15.137065 kernel: BTRFS info (device sda6): using free space tree
Mar 25 01:16:15.149838 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 25 01:16:15.144513 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 25 01:16:15.328721 systemd-networkd[873]: enP27637s1: Gained IPv6LL
Mar 25 01:16:15.392725 systemd-networkd[873]: eth0: Gained IPv6LL
Mar 25 01:16:15.729315 coreos-metadata[918]: Mar 25 01:16:15.729 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 25 01:16:15.740369 coreos-metadata[918]: Mar 25 01:16:15.740 INFO Fetch successful
Mar 25 01:16:15.747379 coreos-metadata[918]: Mar 25 01:16:15.747 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Mar 25 01:16:15.760789 coreos-metadata[918]: Mar 25 01:16:15.760 INFO Fetch successful
Mar 25 01:16:15.774586 coreos-metadata[918]: Mar 25 01:16:15.774 INFO wrote hostname ci-4284.0.0-a-f1ebfb6c0b to /sysroot/etc/hostname
Mar 25 01:16:15.784297 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 25 01:16:16.038450 initrd-setup-root[946]: cut: /sysroot/etc/passwd: No such file or directory
Mar 25 01:16:16.064073 initrd-setup-root[953]: cut: /sysroot/etc/group: No such file or directory
Mar 25 01:16:16.105789 initrd-setup-root[960]: cut: /sysroot/etc/shadow: No such file or directory
Mar 25 01:16:16.113386 initrd-setup-root[967]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 25 01:16:16.949955 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 25 01:16:16.959711 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 25 01:16:16.981314 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 25 01:16:16.995450 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 25 01:16:17.007349 kernel: BTRFS info (device sda6): last unmount of filesystem 09629b08-d05c-4ce3-8bf7-615041c4b2c9
Mar 25 01:16:17.022582 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 25 01:16:17.033306 ignition[1040]: INFO : Ignition 2.20.0
Mar 25 01:16:17.033306 ignition[1040]: INFO : Stage: mount
Mar 25 01:16:17.041512 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 25 01:16:17.041512 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 25 01:16:17.041512 ignition[1040]: INFO : mount: mount passed
Mar 25 01:16:17.041512 ignition[1040]: INFO : Ignition finished successfully
Mar 25 01:16:17.040419 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 25 01:16:17.050725 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 25 01:16:17.078786 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 25 01:16:17.119611 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/sda6 scanned by mount (1050)
Mar 25 01:16:17.119688 kernel: BTRFS info (device sda6): first mount of filesystem 09629b08-d05c-4ce3-8bf7-615041c4b2c9
Mar 25 01:16:17.125836 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 25 01:16:17.129911 kernel: BTRFS info (device sda6): using free space tree
Mar 25 01:16:17.138619 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 25 01:16:17.138790 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 25 01:16:17.172640 ignition[1067]: INFO : Ignition 2.20.0
Mar 25 01:16:17.172640 ignition[1067]: INFO : Stage: files
Mar 25 01:16:17.172640 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 25 01:16:17.172640 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 25 01:16:17.193201 ignition[1067]: DEBUG : files: compiled without relabeling support, skipping
Mar 25 01:16:17.204554 ignition[1067]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 25 01:16:17.204554 ignition[1067]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 25 01:16:17.278492 ignition[1067]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 25 01:16:17.286068 ignition[1067]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 25 01:16:17.286068 ignition[1067]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 25 01:16:17.279458 unknown[1067]: wrote ssh authorized keys file for user: core
Mar 25 01:16:17.306536 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Mar 25 01:16:17.317019 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Mar 25 01:16:17.688133 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 25 01:16:17.842511 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Mar 25 01:16:17.842511 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 25 01:16:17.861686 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 25 01:16:17.861686 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 25 01:16:17.861686 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 25 01:16:17.861686 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 25 01:16:17.861686 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 25 01:16:17.861686 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 25 01:16:17.861686 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 25 01:16:17.861686 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 25 01:16:17.861686 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 25 01:16:17.861686 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Mar 25 01:16:17.861686 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Mar 25 01:16:17.861686 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Mar 25 01:16:17.861686 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Mar 25 01:16:18.272970 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 25 01:16:18.620433 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Mar 25 01:16:18.620433 ignition[1067]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 25 01:16:18.673903 ignition[1067]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 25 01:16:18.686921 ignition[1067]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 25 01:16:18.686921 ignition[1067]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 25 01:16:18.686921 ignition[1067]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 25 01:16:18.686921 ignition[1067]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 25 01:16:18.686921 ignition[1067]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 25 01:16:18.686921 ignition[1067]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 25 01:16:18.686921 ignition[1067]: INFO : files: files passed
Mar 25 01:16:18.686921 ignition[1067]: INFO : Ignition finished successfully
Mar 25 01:16:18.687888 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 25 01:16:18.708836 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 25 01:16:18.742969 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 25 01:16:18.775640 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 25 01:16:18.777642 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 25 01:16:18.823152 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 25 01:16:18.823152 initrd-setup-root-after-ignition[1097]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 25 01:16:18.819691 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 25 01:16:18.870912 initrd-setup-root-after-ignition[1101]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 25 01:16:18.830851 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 25 01:16:18.849749 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 25 01:16:18.911250 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 25 01:16:18.911587 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 25 01:16:18.926644 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 25 01:16:18.939612 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 25 01:16:18.951173 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 25 01:16:18.953785 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 25 01:16:18.992721 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 25 01:16:19.004789 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 25 01:16:19.034711 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 25 01:16:19.042142 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 25 01:16:19.057028 systemd[1]: Stopped target timers.target - Timer Units. Mar 25 01:16:19.070266 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 25 01:16:19.070388 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 25 01:16:19.087415 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 25 01:16:19.093862 systemd[1]: Stopped target basic.target - Basic System. Mar 25 01:16:19.106404 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 25 01:16:19.119431 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 25 01:16:19.132074 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 25 01:16:19.145908 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 25 01:16:19.159643 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 25 01:16:19.174208 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 25 01:16:19.186880 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 25 01:16:19.200495 systemd[1]: Stopped target swap.target - Swaps. Mar 25 01:16:19.211945 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 25 01:16:19.212076 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 25 01:16:19.229178 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 25 01:16:19.236195 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 25 01:16:19.249552 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 25 01:16:19.255143 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 25 01:16:19.262663 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 25 01:16:19.262778 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 25 01:16:19.281796 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 25 01:16:19.281918 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 25 01:16:19.289425 systemd[1]: ignition-files.service: Deactivated successfully. Mar 25 01:16:19.289520 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 25 01:16:19.301570 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. 
Mar 25 01:16:19.388289 ignition[1121]: INFO : Ignition 2.20.0 Mar 25 01:16:19.388289 ignition[1121]: INFO : Stage: umount Mar 25 01:16:19.388289 ignition[1121]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 25 01:16:19.388289 ignition[1121]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 25 01:16:19.388289 ignition[1121]: INFO : umount: umount passed Mar 25 01:16:19.388289 ignition[1121]: INFO : Ignition finished successfully Mar 25 01:16:19.301687 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 25 01:16:19.319820 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 25 01:16:19.366869 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 25 01:16:19.373818 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 25 01:16:19.373980 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 25 01:16:19.381649 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 25 01:16:19.381745 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 25 01:16:19.397051 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 25 01:16:19.400666 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 25 01:16:19.409384 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 25 01:16:19.409518 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 25 01:16:19.423760 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 25 01:16:19.423823 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 25 01:16:19.436269 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 25 01:16:19.436330 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 25 01:16:19.449064 systemd[1]: Stopped target network.target - Network. Mar 25 01:16:19.457774 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 25 01:16:19.457839 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 25 01:16:19.470990 systemd[1]: Stopped target paths.target - Path Units. Mar 25 01:16:19.476612 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 25 01:16:19.484625 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 25 01:16:19.492389 systemd[1]: Stopped target slices.target - Slice Units. Mar 25 01:16:19.510108 systemd[1]: Stopped target sockets.target - Socket Units. Mar 25 01:16:19.522483 systemd[1]: iscsid.socket: Deactivated successfully. Mar 25 01:16:19.522615 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 25 01:16:19.535462 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 25 01:16:19.535497 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 25 01:16:19.547773 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 25 01:16:19.547829 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 25 01:16:19.561761 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 25 01:16:19.561814 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 25 01:16:19.575338 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 25 01:16:19.587430 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 25 01:16:19.601321 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Mar 25 01:16:19.602070 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 25 01:16:19.898206 kernel: hv_netvsc 000d3a06-cd00-000d-3a06-cd00000d3a06 eth0: Data path switched from VF: enP27637s1 Mar 25 01:16:19.602161 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 25 01:16:19.616416 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 25 01:16:19.616517 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 25 01:16:19.646685 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 25 01:16:19.647067 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 25 01:16:19.647201 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 25 01:16:19.670193 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 25 01:16:19.673274 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 25 01:16:19.673342 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 25 01:16:19.687722 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 25 01:16:19.704830 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 25 01:16:19.704942 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 25 01:16:19.721320 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 25 01:16:19.721396 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 25 01:16:19.743518 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 25 01:16:19.743585 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 25 01:16:19.751592 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 25 01:16:19.751667 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 25 01:16:19.770162 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 25 01:16:19.784186 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 25 01:16:19.784258 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 25 01:16:19.808008 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 25 01:16:19.809116 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 25 01:16:19.822124 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 25 01:16:19.822178 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 25 01:16:19.834241 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 25 01:16:19.834289 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 25 01:16:19.847584 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 25 01:16:19.847791 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 25 01:16:20.203156 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). Mar 25 01:16:19.879993 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 25 01:16:19.880057 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 25 01:16:19.898015 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 25 01:16:19.898085 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 25 01:16:19.922827 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 25 01:16:19.945665 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 25 01:16:19.945746 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 25 01:16:19.965856 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 25 01:16:19.965914 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 25 01:16:19.978319 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 25 01:16:19.978379 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 25 01:16:19.994016 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 25 01:16:19.994068 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 25 01:16:20.014389 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 25 01:16:20.014458 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 25 01:16:20.014814 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 25 01:16:20.014939 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 25 01:16:20.026640 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 25 01:16:20.026747 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 25 01:16:20.039371 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 25 01:16:20.039462 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 25 01:16:20.056250 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 25 01:16:20.067418 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 25 01:16:20.067508 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 25 01:16:20.082772 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 25 01:16:20.127710 systemd[1]: Switching root. Mar 25 01:16:20.377452 systemd-journald[218]: Journal stopped Mar 25 01:16:25.904668 kernel: SELinux: policy capability network_peer_controls=1 Mar 25 01:16:25.904697 kernel: SELinux: policy capability open_perms=1 Mar 25 01:16:25.904708 kernel: SELinux: policy capability extended_socket_class=1 Mar 25 01:16:25.904716 kernel: SELinux: policy capability always_check_network=0 Mar 25 01:16:25.904726 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 25 01:16:25.904734 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 25 01:16:25.904743 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 25 01:16:25.904751 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 25 01:16:25.904760 kernel: audit: type=1403 audit(1742865381.163:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 25 01:16:25.904770 systemd[1]: Successfully loaded SELinux policy in 209.101ms. Mar 25 01:16:25.904782 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.104ms. 
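
Around the switch-root above, the initrd journal stops and the real root's journald takes over. To replay the same PID 1 and kernel stream from the persisted journal later, a sketch using the python-systemd bindings (assumed installed; not part of this boot flow):

```python
from systemd import journal

# Replay this boot's PID 1 (systemd) and kernel messages, i.e. the
# same stream this log section shows.
j = journal.Reader()
j.this_boot()                     # restrict to the current boot
j.add_match(_PID="1")             # PID 1 (systemd) messages...
j.add_disjunction()
j.add_match(_TRANSPORT="kernel")  # ...or kernel messages
for entry in j:
    ts = entry["__REALTIME_TIMESTAMP"].strftime("%b %d %H:%M:%S.%f")
    print(ts, entry.get("SYSLOG_IDENTIFIER", "kernel"), ":", entry["MESSAGE"])
```
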
Mar 25 01:16:25.904792 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 25 01:16:25.904801 systemd[1]: Detected virtualization microsoft. Mar 25 01:16:25.904809 systemd[1]: Detected architecture arm64. Mar 25 01:16:25.904818 systemd[1]: Detected first boot. Mar 25 01:16:25.904831 systemd[1]: Hostname set to <ci-4284.0.0-a-f1ebfb6c0b>. Mar 25 01:16:25.904840 systemd[1]: Initializing machine ID from random generator. Mar 25 01:16:25.904849 zram_generator::config[1166]: No configuration found. Mar 25 01:16:25.904858 kernel: NET: Registered PF_VSOCK protocol family Mar 25 01:16:25.904867 systemd[1]: Populated /etc with preset unit settings. Mar 25 01:16:25.904876 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 25 01:16:25.904885 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 25 01:16:25.904895 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 25 01:16:25.904904 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 25 01:16:25.904913 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 25 01:16:25.904923 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 25 01:16:25.904932 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 25 01:16:25.904941 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 25 01:16:25.904950 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 25 01:16:25.904961 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 25 01:16:25.904970 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 25 01:16:25.904979 systemd[1]: Created slice user.slice - User and Session Slice. Mar 25 01:16:25.904988 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 25 01:16:25.904997 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 25 01:16:25.905006 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 25 01:16:25.905015 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 25 01:16:25.905025 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 25 01:16:25.905035 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 25 01:16:25.905044 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Mar 25 01:16:25.905053 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 25 01:16:25.905064 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 25 01:16:25.905073 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 25 01:16:25.905083 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 25 01:16:25.905092 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
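
The +/- string in the systemd banner above encodes which build options this systemd 256.8 was compiled with. Turning it into a lookup table is a one-liner; the string below is copied from the log:

```python
# Parse the build-feature banner into {feature: enabled} ("+X" -> True).
banner = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
          "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
          "+IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK "
          "+PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ "
          "+ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE")
features = {tok[1:]: tok.startswith("+") for tok in banner.split()}
print([name for name, on in features.items() if not on])
# -> options this build lacks: APPARMOR, GNUTLS, ACL, FIDO2, IDN, ...
```
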
Mar 25 01:16:25.905101 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 25 01:16:25.905112 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 25 01:16:25.905121 systemd[1]: Reached target slices.target - Slice Units. Mar 25 01:16:25.905130 systemd[1]: Reached target swap.target - Swaps. Mar 25 01:16:25.905140 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 25 01:16:25.905149 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 25 01:16:25.905159 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 25 01:16:25.905170 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 25 01:16:25.905179 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 25 01:16:25.905188 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 25 01:16:25.905198 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 25 01:16:25.905207 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 25 01:16:25.905216 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 25 01:16:25.905226 systemd[1]: Mounting media.mount - External Media Directory... Mar 25 01:16:25.905237 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 25 01:16:25.905246 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 25 01:16:25.905255 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 25 01:16:25.905268 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 25 01:16:25.905278 systemd[1]: Reached target machines.target - Containers. Mar 25 01:16:25.905287 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 25 01:16:25.905296 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 25 01:16:25.905305 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 25 01:16:25.905316 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 25 01:16:25.905326 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 25 01:16:25.905335 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 25 01:16:25.905344 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 25 01:16:25.905354 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 25 01:16:25.905363 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 25 01:16:25.905372 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 25 01:16:25.905382 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 25 01:16:25.905392 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 25 01:16:25.905402 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 25 01:16:25.905411 systemd[1]: Stopped systemd-fsck-usr.service. 
Mar 25 01:16:25.905420 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 25 01:16:25.905430 kernel: loop: module loaded Mar 25 01:16:25.905439 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 25 01:16:25.905447 kernel: fuse: init (API version 7.39) Mar 25 01:16:25.905456 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 25 01:16:25.905465 kernel: ACPI: bus type drm_connector registered Mar 25 01:16:25.905475 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 25 01:16:25.905503 systemd-journald[1270]: Collecting audit messages is disabled. Mar 25 01:16:25.905524 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 25 01:16:25.905535 systemd-journald[1270]: Journal started Mar 25 01:16:25.905558 systemd-journald[1270]: Runtime Journal (/run/log/journal/eb2c61a4ab834a38aecc31014de2382d) is 8M, max 78.5M, 70.5M free. Mar 25 01:16:24.895915 systemd[1]: Queued start job for default target multi-user.target. Mar 25 01:16:24.902767 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Mar 25 01:16:24.903176 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 25 01:16:24.903525 systemd[1]: systemd-journald.service: Consumed 3.533s CPU time. Mar 25 01:16:25.944502 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 25 01:16:25.962959 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 25 01:16:25.974844 systemd[1]: verity-setup.service: Deactivated successfully. Mar 25 01:16:25.974907 systemd[1]: Stopped verity-setup.service. Mar 25 01:16:25.995691 systemd[1]: Started systemd-journald.service - Journal Service. Mar 25 01:16:25.996104 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 25 01:16:26.002581 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 25 01:16:26.009812 systemd[1]: Mounted media.mount - External Media Directory. Mar 25 01:16:26.015885 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 25 01:16:26.022840 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 25 01:16:26.029129 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 25 01:16:26.034741 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 25 01:16:26.041648 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 25 01:16:26.049026 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 25 01:16:26.049187 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 25 01:16:26.056225 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 25 01:16:26.056387 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 25 01:16:26.064225 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 25 01:16:26.064430 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 25 01:16:26.070946 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 25 01:16:26.072629 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 25 01:16:26.079702 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Mar 25 01:16:26.079871 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 25 01:16:26.086313 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 25 01:16:26.086466 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 25 01:16:26.092824 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 25 01:16:26.099565 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 25 01:16:26.107166 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 25 01:16:26.114330 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 25 01:16:26.123186 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 25 01:16:26.139438 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 25 01:16:26.147214 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 25 01:16:26.164280 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 25 01:16:26.171372 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 25 01:16:26.171407 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 25 01:16:26.178354 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 25 01:16:26.186475 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 25 01:16:26.204799 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 25 01:16:26.212177 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 25 01:16:26.218659 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 25 01:16:26.226030 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 25 01:16:26.234942 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 25 01:16:26.237772 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 25 01:16:26.244696 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 25 01:16:26.252727 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 25 01:16:26.259733 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 25 01:16:26.273785 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 25 01:16:26.286780 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 25 01:16:26.307039 systemd-journald[1270]: Time spent on flushing to /var/log/journal/eb2c61a4ab834a38aecc31014de2382d is 13.597ms for 910 entries. Mar 25 01:16:26.307039 systemd-journald[1270]: System Journal (/var/log/journal/eb2c61a4ab834a38aecc31014de2382d) is 8M, max 2.6G, 2.6G free. Mar 25 01:16:26.381090 systemd-journald[1270]: Received client request to flush runtime journal. Mar 25 01:16:26.381147 kernel: loop0: detected capacity change from 0 to 201592 Mar 25 01:16:26.305119 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Mar 25 01:16:26.318281 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 25 01:16:26.331331 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 25 01:16:26.341057 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 25 01:16:26.349492 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 25 01:16:26.368891 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 25 01:16:26.383133 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 25 01:16:26.400634 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 25 01:16:26.402665 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 25 01:16:26.413541 udevadm[1309]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 25 01:16:26.425165 systemd-tmpfiles[1308]: ACLs are not supported, ignoring. Mar 25 01:16:26.425190 systemd-tmpfiles[1308]: ACLs are not supported, ignoring. Mar 25 01:16:26.431863 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 25 01:16:26.442373 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 25 01:16:26.456617 kernel: loop1: detected capacity change from 0 to 28888 Mar 25 01:16:26.473623 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 25 01:16:26.475148 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 25 01:16:26.763253 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 25 01:16:26.771754 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 25 01:16:26.791368 systemd-tmpfiles[1327]: ACLs are not supported, ignoring. Mar 25 01:16:26.791387 systemd-tmpfiles[1327]: ACLs are not supported, ignoring. Mar 25 01:16:26.795716 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 25 01:16:26.860651 kernel: loop2: detected capacity change from 0 to 126448 Mar 25 01:16:27.262809 kernel: loop3: detected capacity change from 0 to 103832 Mar 25 01:16:27.498023 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 25 01:16:27.507024 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 25 01:16:27.543288 systemd-udevd[1333]: Using default interface naming scheme 'v255'. Mar 25 01:16:27.658624 kernel: loop4: detected capacity change from 0 to 201592 Mar 25 01:16:27.668657 kernel: loop5: detected capacity change from 0 to 28888 Mar 25 01:16:27.676706 kernel: loop6: detected capacity change from 0 to 126448 Mar 25 01:16:27.686648 kernel: loop7: detected capacity change from 0 to 103832 Mar 25 01:16:27.689759 (sd-merge)[1335]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Mar 25 01:16:27.690185 (sd-merge)[1335]: Merged extensions into '/usr'. Mar 25 01:16:27.693856 systemd[1]: Reload requested from client PID 1306 ('systemd-sysext') (unit systemd-sysext.service)... Mar 25 01:16:27.693975 systemd[1]: Reloading... Mar 25 01:16:27.752648 zram_generator::config[1359]: No configuration found. 
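
The (sd-merge) lines above are systemd-sysext overlaying the four extension images onto /usr; the loop4-loop7 capacity changes mirror the earlier loop0-loop3 ones, one pair per image. A small sketch for inspecting that state on a live host, assuming the systemd-sysext binary is on PATH:

```python
import subprocess
from pathlib import Path

# Directories systemd-sysext scans for extension images; on this host
# /etc/extensions/kubernetes.raw is the symlink Ignition wrote earlier.
for d in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
    p = Path(d)
    if p.is_dir():
        for img in sorted(p.iterdir()):
            print(img, "->", img.resolve())

# Show whether /usr and /opt currently have extensions merged.
subprocess.run(["systemd-sysext", "status"], check=True)
```
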
Mar 25 01:16:27.916853 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 25 01:16:28.005111 kernel: mousedev: PS/2 mouse device common for all mice Mar 25 01:16:28.038447 systemd[1]: Reloading finished in 344 ms. Mar 25 01:16:28.053488 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 25 01:16:28.063773 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 25 01:16:28.080677 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Mar 25 01:16:28.090451 systemd[1]: Starting ensure-sysext.service... Mar 25 01:16:28.106578 kernel: hv_vmbus: registering driver hv_balloon Mar 25 01:16:28.106667 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Mar 25 01:16:28.103911 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 25 01:16:28.116752 kernel: hv_balloon: Memory hot add disabled on ARM64 Mar 25 01:16:28.126024 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 25 01:16:28.153680 kernel: hv_vmbus: registering driver hyperv_fb Mar 25 01:16:28.167753 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Mar 25 01:16:28.167846 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Mar 25 01:16:28.174824 kernel: Console: switching to colour dummy device 80x25 Mar 25 01:16:28.182972 kernel: Console: switching to colour frame buffer device 128x48 Mar 25 01:16:28.183335 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 25 01:16:28.232018 systemd-tmpfiles[1456]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 25 01:16:28.232242 systemd-tmpfiles[1456]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 25 01:16:28.232940 systemd-tmpfiles[1456]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 25 01:16:28.233207 systemd-tmpfiles[1456]: ACLs are not supported, ignoring. Mar 25 01:16:28.233264 systemd-tmpfiles[1456]: ACLs are not supported, ignoring. Mar 25 01:16:28.236756 systemd-tmpfiles[1456]: Detected autofs mount point /boot during canonicalization of boot. Mar 25 01:16:28.236766 systemd-tmpfiles[1456]: Skipping /boot Mar 25 01:16:28.256418 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 25 01:16:28.264549 systemd-tmpfiles[1456]: Detected autofs mount point /boot during canonicalization of boot. Mar 25 01:16:28.264566 systemd-tmpfiles[1456]: Skipping /boot Mar 25 01:16:28.294461 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 25 01:16:28.325666 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1430) Mar 25 01:16:28.328952 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 25 01:16:28.358800 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 25 01:16:28.371699 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 25 01:16:28.393912 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 25 01:16:28.407369 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Mar 25 01:16:28.438293 systemd[1]: Reload requested from client PID 1451 ('systemctl') (unit ensure-sysext.service)... Mar 25 01:16:28.438547 systemd[1]: Reloading... Mar 25 01:16:28.542706 zram_generator::config[1580]: No configuration found. Mar 25 01:16:28.551125 systemd-networkd[1452]: lo: Link UP Mar 25 01:16:28.551135 systemd-networkd[1452]: lo: Gained carrier Mar 25 01:16:28.553548 systemd-networkd[1452]: Enumeration completed Mar 25 01:16:28.553974 systemd-networkd[1452]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 25 01:16:28.554069 systemd-networkd[1452]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 25 01:16:28.555940 systemd-resolved[1529]: Positive Trust Anchors: Mar 25 01:16:28.555961 systemd-resolved[1529]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 25 01:16:28.555992 systemd-resolved[1529]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 25 01:16:28.562224 systemd-resolved[1529]: Using system hostname 'ci-4284.0.0-a-f1ebfb6c0b'. Mar 25 01:16:28.608964 augenrules[1634]: No rules Mar 25 01:16:28.609867 kernel: mlx5_core 6bf5:00:02.0 enP27637s1: Link up Mar 25 01:16:28.636667 kernel: hv_netvsc 000d3a06-cd00-000d-3a06-cd00000d3a06 eth0: Data path switched to VF: enP27637s1 Mar 25 01:16:28.638341 systemd-networkd[1452]: enP27637s1: Link UP Mar 25 01:16:28.638475 systemd-networkd[1452]: eth0: Link UP Mar 25 01:16:28.638479 systemd-networkd[1452]: eth0: Gained carrier Mar 25 01:16:28.638818 systemd-networkd[1452]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 25 01:16:28.643946 systemd-networkd[1452]: enP27637s1: Gained carrier Mar 25 01:16:28.650637 systemd-networkd[1452]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 25 01:16:28.679536 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 25 01:16:28.776286 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 25 01:16:28.784275 systemd[1]: Reloading finished in 345 ms. Mar 25 01:16:28.798487 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 25 01:16:28.805567 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 25 01:16:28.822811 systemd[1]: audit-rules.service: Deactivated successfully. Mar 25 01:16:28.823018 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 25 01:16:28.831407 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 25 01:16:28.852870 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 25 01:16:28.876904 systemd[1]: Reached target network.target - Network. 
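
The DHCPv4 line above gives the full addressing picture: 10.200.20.40/24 on eth0, gateway 10.200.20.1, lease handed out via 168.63.129.16 (Azure's wireserver). A quick standard-library sanity check of those values:

```python
import ipaddress

# Values copied from the systemd-networkd DHCPv4 line above.
iface = ipaddress.ip_interface("10.200.20.40/24")
gateway = ipaddress.ip_address("10.200.20.1")
dhcp_server = ipaddress.ip_address("168.63.129.16")

print(iface.network)                 # 10.200.20.0/24
print(gateway in iface.network)      # True: the gateway is on-link
print(dhcp_server in iface.network)  # False: the wireserver is off-subnet
```
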
Mar 25 01:16:28.882208 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 25 01:16:28.888611 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 25 01:16:28.890041 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 25 01:16:28.898210 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 25 01:16:28.910167 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 25 01:16:28.916445 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 25 01:16:28.919699 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 25 01:16:28.926534 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 25 01:16:28.928774 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 25 01:16:28.940287 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 25 01:16:28.954172 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 25 01:16:28.973644 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 25 01:16:28.983322 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 25 01:16:28.983490 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 25 01:16:28.990759 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 25 01:16:28.990914 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 25 01:16:29.003388 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 25 01:16:29.003567 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 25 01:16:29.011056 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 25 01:16:29.030491 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 25 01:16:29.030748 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 25 01:16:29.030886 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 25 01:16:29.039096 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 25 01:16:29.044480 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 25 01:16:29.047934 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 25 01:16:29.058666 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 25 01:16:29.070142 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 25 01:16:29.083038 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Mar 25 01:16:29.092861 augenrules[1666]: /sbin/augenrules: No change Mar 25 01:16:29.094033 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 25 01:16:29.094238 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 25 01:16:29.094476 systemd[1]: Reached target time-set.target - System Time Set. Mar 25 01:16:29.102371 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 25 01:16:29.106005 augenrules[1687]: No rules Mar 25 01:16:29.107642 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 25 01:16:29.115532 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 25 01:16:29.116676 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 25 01:16:29.125716 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 25 01:16:29.126429 systemd[1]: audit-rules.service: Deactivated successfully. Mar 25 01:16:29.126715 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 25 01:16:29.132898 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 25 01:16:29.133654 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 25 01:16:29.140518 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 25 01:16:29.140697 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 25 01:16:29.147072 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 25 01:16:29.147232 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 25 01:16:29.154257 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 25 01:16:29.154415 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 25 01:16:29.164385 systemd[1]: Finished ensure-sysext.service. Mar 25 01:16:29.177738 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 25 01:16:29.186467 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 25 01:16:29.192575 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 25 01:16:29.192693 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 25 01:16:29.201791 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 25 01:16:29.265623 lvm[1701]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 25 01:16:29.287353 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 25 01:16:29.296130 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 25 01:16:29.303956 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 25 01:16:29.319099 lvm[1706]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Mar 25 01:16:29.351189 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 25 01:16:29.417537 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 25 01:16:29.920851 systemd-networkd[1452]: eth0: Gained IPv6LL Mar 25 01:16:29.923619 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 25 01:16:29.931231 systemd[1]: Reached target network-online.target - Network is Online. Mar 25 01:16:30.176740 systemd-networkd[1452]: enP27637s1: Gained IPv6LL Mar 25 01:16:31.612495 ldconfig[1301]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 25 01:16:31.625012 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 25 01:16:31.633312 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 25 01:16:31.658809 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 25 01:16:31.666138 systemd[1]: Reached target sysinit.target - System Initialization. Mar 25 01:16:31.672458 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 25 01:16:31.679680 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 25 01:16:31.686918 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 25 01:16:31.692702 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 25 01:16:31.700425 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 25 01:16:31.707613 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 25 01:16:31.707657 systemd[1]: Reached target paths.target - Path Units. Mar 25 01:16:31.713036 systemd[1]: Reached target timers.target - Timer Units. Mar 25 01:16:31.718972 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 25 01:16:31.726390 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 25 01:16:31.734176 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 25 01:16:31.741628 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 25 01:16:31.748660 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 25 01:16:31.764262 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 25 01:16:31.770540 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 25 01:16:31.778045 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 25 01:16:31.784150 systemd[1]: Reached target sockets.target - Socket Units. Mar 25 01:16:31.789308 systemd[1]: Reached target basic.target - Basic System. Mar 25 01:16:31.794717 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 25 01:16:31.794746 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 25 01:16:31.797065 systemd[1]: Starting chronyd.service - NTP client/server... Mar 25 01:16:31.811734 systemd[1]: Starting containerd.service - containerd container runtime... Mar 25 01:16:31.821483 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
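
The Reached target / Listening on walk above (paths, then timers, then sockets, then basic) is systemd satisfying its default dependency graph bottom-up. On a live host the same graph can be printed directly; a minimal sketch:

```python
import subprocess

# Print what basic.target pulls in, mirroring the target ordering
# reached in the log above.
subprocess.run(["systemctl", "list-dependencies", "basic.target"], check=True)
```
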
Mar 25 01:16:31.836743 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 25 01:16:31.843252 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 25 01:16:31.855742 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 25 01:16:31.861045 (chronyd)[1717]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Mar 25 01:16:31.861995 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 25 01:16:31.862149 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Mar 25 01:16:31.867798 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Mar 25 01:16:31.874124 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Mar 25 01:16:31.876650 jq[1724]: false Mar 25 01:16:31.878529 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:16:31.885791 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 25 01:16:31.892179 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 25 01:16:31.899825 KVP[1726]: KVP starting; pid is:1726 Mar 25 01:16:31.901145 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 25 01:16:31.912147 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 25 01:16:31.915037 kernel: hv_utils: KVP IC version 4.0 Mar 25 01:16:31.914415 KVP[1726]: KVP LIC Version: 3.1 Mar 25 01:16:31.925568 chronyd[1736]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Mar 25 01:16:31.927703 chronyd[1736]: Timezone right/UTC failed leap second check, ignoring Mar 25 01:16:31.929627 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 25 01:16:31.927968 chronyd[1736]: Loaded seccomp filter (level 2) Mar 25 01:16:31.939788 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 25 01:16:31.948951 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 25 01:16:31.951952 extend-filesystems[1725]: Found loop4 Mar 25 01:16:31.959866 extend-filesystems[1725]: Found loop5 Mar 25 01:16:31.959866 extend-filesystems[1725]: Found loop6 Mar 25 01:16:31.959866 extend-filesystems[1725]: Found loop7 Mar 25 01:16:31.959866 extend-filesystems[1725]: Found sda Mar 25 01:16:31.959866 extend-filesystems[1725]: Found sda1 Mar 25 01:16:31.959866 extend-filesystems[1725]: Found sda2 Mar 25 01:16:31.959866 extend-filesystems[1725]: Found sda3 Mar 25 01:16:31.959866 extend-filesystems[1725]: Found usr Mar 25 01:16:31.959866 extend-filesystems[1725]: Found sda4 Mar 25 01:16:31.959866 extend-filesystems[1725]: Found sda6 Mar 25 01:16:31.959866 extend-filesystems[1725]: Found sda7 Mar 25 01:16:31.959866 extend-filesystems[1725]: Found sda9 Mar 25 01:16:31.959866 extend-filesystems[1725]: Checking size of /dev/sda9 Mar 25 01:16:31.953471 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 25 01:16:31.957139 systemd[1]: Starting update-engine.service - Update Engine... 
Mar 25 01:16:31.981876 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 25 01:16:31.991694 systemd[1]: Started chronyd.service - NTP client/server. Mar 25 01:16:32.007827 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 25 01:16:32.075574 jq[1743]: true Mar 25 01:16:32.008043 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 25 01:16:32.021103 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 25 01:16:32.021307 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 25 01:16:32.055298 systemd[1]: motdgen.service: Deactivated successfully. Mar 25 01:16:32.056638 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 25 01:16:32.090312 jq[1756]: true Mar 25 01:16:32.107313 (ntainerd)[1757]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 25 01:16:32.126358 update_engine[1740]: I20250325 01:16:32.125126 1740 main.cc:92] Flatcar Update Engine starting Mar 25 01:16:32.127898 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 25 01:16:32.157086 dbus-daemon[1720]: [system] SELinux support is enabled Mar 25 01:16:32.158431 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 25 01:16:32.165915 systemd-logind[1738]: New seat seat0. Mar 25 01:16:32.168167 systemd-logind[1738]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Mar 25 01:16:32.187729 extend-filesystems[1725]: Old size kept for /dev/sda9 Mar 25 01:16:32.187729 extend-filesystems[1725]: Found sr0 Mar 25 01:16:32.213932 update_engine[1740]: I20250325 01:16:32.169159 1740 update_check_scheduler.cc:74] Next update check in 11m48s Mar 25 01:16:32.182520 dbus-daemon[1720]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 25 01:16:32.173855 systemd[1]: Started systemd-logind.service - User Login Management. Mar 25 01:16:32.180318 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 25 01:16:32.180341 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 25 01:16:32.188921 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 25 01:16:32.188941 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 25 01:16:32.206873 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 25 01:16:32.208786 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 25 01:16:32.223333 systemd[1]: Started update-engine.service - Update Engine. Mar 25 01:16:32.243885 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
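
update_engine's "Next update check in 11m48s" above is its randomized poll interval, with locksmithd coordinating any resulting reboot. For anyone scraping these logs, converting the "XmYs" notation to seconds is trivial:

```python
import re

# Parse update_engine's "XmYs" interval notation into seconds.
msg = "Next update check in 11m48s"
minutes, seconds = map(int, re.search(r"(\d+)m(\d+)s", msg).groups())
print(minutes * 60 + seconds)  # 708 seconds
```
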
Mar 25 01:16:32.297280 tar[1746]: linux-arm64/LICENSE Mar 25 01:16:32.297280 tar[1746]: linux-arm64/helm Mar 25 01:16:32.310661 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1793) Mar 25 01:16:32.366611 coreos-metadata[1719]: Mar 25 01:16:32.365 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 25 01:16:32.372679 coreos-metadata[1719]: Mar 25 01:16:32.371 INFO Fetch successful Mar 25 01:16:32.372679 coreos-metadata[1719]: Mar 25 01:16:32.371 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Mar 25 01:16:32.378703 coreos-metadata[1719]: Mar 25 01:16:32.378 INFO Fetch successful Mar 25 01:16:32.378857 coreos-metadata[1719]: Mar 25 01:16:32.378 INFO Fetching http://168.63.129.16/machine/c1fd036d-9a84-44f2-85f7-b0daf5f86c13/9f9ddc8f%2D5b14%2D4837%2Da308%2Daa9da08d42db.%5Fci%2D4284.0.0%2Da%2Df1ebfb6c0b?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Mar 25 01:16:32.381527 coreos-metadata[1719]: Mar 25 01:16:32.381 INFO Fetch successful Mar 25 01:16:32.381527 coreos-metadata[1719]: Mar 25 01:16:32.381 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Mar 25 01:16:32.394852 coreos-metadata[1719]: Mar 25 01:16:32.394 INFO Fetch successful Mar 25 01:16:32.449622 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 25 01:16:32.462277 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 25 01:16:33.095010 bash[1786]: Updated "/home/core/.ssh/authorized_keys" Mar 25 01:16:33.096046 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 25 01:16:33.106689 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 25 01:16:33.185364 tar[1746]: linux-arm64/README.md Mar 25 01:16:33.187106 locksmithd[1794]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 25 01:16:33.206210 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 25 01:16:33.220157 sshd_keygen[1754]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 25 01:16:33.239217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:16:33.246512 (kubelet)[1879]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:16:33.263099 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 25 01:16:33.272827 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 25 01:16:33.285592 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Mar 25 01:16:33.300222 systemd[1]: issuegen.service: Deactivated successfully. Mar 25 01:16:33.300433 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 25 01:16:33.310271 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 25 01:16:33.336900 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Mar 25 01:16:33.348704 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 25 01:16:33.365313 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 25 01:16:33.375884 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 25 01:16:33.383388 systemd[1]: Reached target getty.target - Login Prompts. 
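
The final fetch above targets the Azure Instance Metadata Service at 169.254.169.254; unlike the wireserver calls, IMDS requires a "Metadata: true" header, which coreos-metadata sets internally. A sketch reproducing that last query with the standard library (only meaningful when run on the VM itself, since the endpoint is link-local):

```python
import urllib.request

# Same vmSize query coreos-metadata issues above; IMDS rejects
# requests that lack the "Metadata: true" header.
url = ("http://169.254.169.254/metadata/instance/compute/vmSize"
       "?api-version=2017-08-01&format=text")
req = urllib.request.Request(url, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    print(resp.read().decode())  # the VM size string (value not shown in the log)
```
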
Mar 25 01:16:33.639079 kubelet[1879]: E0325 01:16:33.638966 1879 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:16:33.641843 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:16:33.641993 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 01:16:33.642295 systemd[1]: kubelet.service: Consumed 712ms CPU time, 247.9M memory peak. Mar 25 01:16:33.931778 containerd[1757]: time="2025-03-25T01:16:33Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 25 01:16:33.933213 containerd[1757]: time="2025-03-25T01:16:33.933165120Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 Mar 25 01:16:33.942292 containerd[1757]: time="2025-03-25T01:16:33.942236480Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.08µs" Mar 25 01:16:33.942292 containerd[1757]: time="2025-03-25T01:16:33.942283920Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 25 01:16:33.942429 containerd[1757]: time="2025-03-25T01:16:33.942305560Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 25 01:16:33.942525 containerd[1757]: time="2025-03-25T01:16:33.942497720Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 25 01:16:33.942562 containerd[1757]: time="2025-03-25T01:16:33.942525040Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 25 01:16:33.942562 containerd[1757]: time="2025-03-25T01:16:33.942553560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 25 01:16:33.942654 containerd[1757]: time="2025-03-25T01:16:33.942630800Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 25 01:16:33.942654 containerd[1757]: time="2025-03-25T01:16:33.942651640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 25 01:16:33.942936 containerd[1757]: time="2025-03-25T01:16:33.942909320Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 25 01:16:33.942984 containerd[1757]: time="2025-03-25T01:16:33.942933880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 25 01:16:33.942984 containerd[1757]: time="2025-03-25T01:16:33.942946040Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 25 01:16:33.942984 containerd[1757]: time="2025-03-25T01:16:33.942954480Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 25 01:16:33.943099 containerd[1757]: time="2025-03-25T01:16:33.943079000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 25 01:16:33.943372 containerd[1757]: time="2025-03-25T01:16:33.943347440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 25 01:16:33.943406 containerd[1757]: time="2025-03-25T01:16:33.943386600Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 25 01:16:33.943432 containerd[1757]: time="2025-03-25T01:16:33.943403760Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 25 01:16:33.943494 containerd[1757]: time="2025-03-25T01:16:33.943434120Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 25 01:16:33.943775 containerd[1757]: time="2025-03-25T01:16:33.943750280Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 25 01:16:33.943894 containerd[1757]: time="2025-03-25T01:16:33.943843320Z" level=info msg="metadata content store policy set" policy=shared Mar 25 01:16:33.995066 containerd[1757]: time="2025-03-25T01:16:33.994961720Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 25 01:16:33.995066 containerd[1757]: time="2025-03-25T01:16:33.995048360Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 25 01:16:33.995066 containerd[1757]: time="2025-03-25T01:16:33.995066520Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 25 01:16:33.995066 containerd[1757]: time="2025-03-25T01:16:33.995079960Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 25 01:16:33.995281 containerd[1757]: time="2025-03-25T01:16:33.995097720Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 25 01:16:33.995281 containerd[1757]: time="2025-03-25T01:16:33.995110720Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 25 01:16:33.995281 containerd[1757]: time="2025-03-25T01:16:33.995124360Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 25 01:16:33.995281 containerd[1757]: time="2025-03-25T01:16:33.995136880Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 25 01:16:33.995281 containerd[1757]: time="2025-03-25T01:16:33.995150400Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 25 01:16:33.995281 containerd[1757]: time="2025-03-25T01:16:33.995162320Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 25 01:16:33.995281 containerd[1757]: time="2025-03-25T01:16:33.995172840Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 25 01:16:33.995281 containerd[1757]: time="2025-03-25T01:16:33.995186680Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task 
type=io.containerd.runtime.v2 Mar 25 01:16:33.995422 containerd[1757]: time="2025-03-25T01:16:33.995360400Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 25 01:16:33.995422 containerd[1757]: time="2025-03-25T01:16:33.995387400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 25 01:16:33.995422 containerd[1757]: time="2025-03-25T01:16:33.995401560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 25 01:16:33.995422 containerd[1757]: time="2025-03-25T01:16:33.995413680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 25 01:16:33.995495 containerd[1757]: time="2025-03-25T01:16:33.995425640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 25 01:16:33.995495 containerd[1757]: time="2025-03-25T01:16:33.995439600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 25 01:16:33.995495 containerd[1757]: time="2025-03-25T01:16:33.995458240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 25 01:16:33.995495 containerd[1757]: time="2025-03-25T01:16:33.995470200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 25 01:16:33.995567 containerd[1757]: time="2025-03-25T01:16:33.995498960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 25 01:16:33.995567 containerd[1757]: time="2025-03-25T01:16:33.995512400Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 25 01:16:33.995567 containerd[1757]: time="2025-03-25T01:16:33.995527160Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 25 01:16:33.995701 containerd[1757]: time="2025-03-25T01:16:33.995634400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 25 01:16:33.995701 containerd[1757]: time="2025-03-25T01:16:33.995661960Z" level=info msg="Start snapshots syncer" Mar 25 01:16:33.995701 containerd[1757]: time="2025-03-25T01:16:33.995688560Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 25 01:16:33.995983 containerd[1757]: time="2025-03-25T01:16:33.995942920Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 25 01:16:33.996150 containerd[1757]: time="2025-03-25T01:16:33.996000040Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 25 01:16:33.996150 containerd[1757]: time="2025-03-25T01:16:33.996082480Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 25 01:16:33.996231 containerd[1757]: time="2025-03-25T01:16:33.996194040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 25 01:16:33.996231 containerd[1757]: time="2025-03-25T01:16:33.996226080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 25 01:16:33.996351 containerd[1757]: time="2025-03-25T01:16:33.996240720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 25 01:16:33.996351 containerd[1757]: time="2025-03-25T01:16:33.996251920Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 25 01:16:33.996351 containerd[1757]: time="2025-03-25T01:16:33.996266240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 25 01:16:33.996351 containerd[1757]: time="2025-03-25T01:16:33.996278000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 25 01:16:33.996351 containerd[1757]: time="2025-03-25T01:16:33.996290120Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 25 01:16:33.996351 containerd[1757]: time="2025-03-25T01:16:33.996318960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 25 01:16:33.996351 containerd[1757]: 
time="2025-03-25T01:16:33.996332680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 25 01:16:33.996351 containerd[1757]: time="2025-03-25T01:16:33.996342520Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 25 01:16:33.996351 containerd[1757]: time="2025-03-25T01:16:33.996378360Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 25 01:16:33.996705 containerd[1757]: time="2025-03-25T01:16:33.996394040Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 25 01:16:33.996705 containerd[1757]: time="2025-03-25T01:16:33.996403960Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 25 01:16:33.996705 containerd[1757]: time="2025-03-25T01:16:33.996413960Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 25 01:16:33.996705 containerd[1757]: time="2025-03-25T01:16:33.996423000Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 25 01:16:33.996705 containerd[1757]: time="2025-03-25T01:16:33.996436680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 25 01:16:33.996705 containerd[1757]: time="2025-03-25T01:16:33.996447680Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 25 01:16:33.996705 containerd[1757]: time="2025-03-25T01:16:33.996464400Z" level=info msg="runtime interface created" Mar 25 01:16:33.996705 containerd[1757]: time="2025-03-25T01:16:33.996469840Z" level=info msg="created NRI interface" Mar 25 01:16:33.996705 containerd[1757]: time="2025-03-25T01:16:33.996482320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 25 01:16:33.996705 containerd[1757]: time="2025-03-25T01:16:33.996494880Z" level=info msg="Connect containerd service" Mar 25 01:16:33.996705 containerd[1757]: time="2025-03-25T01:16:33.996520560Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 25 01:16:33.997312 containerd[1757]: time="2025-03-25T01:16:33.997283840Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 25 01:16:35.300628 containerd[1757]: time="2025-03-25T01:16:35.298193160Z" level=info msg="Start subscribing containerd event" Mar 25 01:16:35.300628 containerd[1757]: time="2025-03-25T01:16:35.298495120Z" level=info msg="Start recovering state" Mar 25 01:16:35.300628 containerd[1757]: time="2025-03-25T01:16:35.298646400Z" level=info msg="Start event monitor" Mar 25 01:16:35.300628 containerd[1757]: time="2025-03-25T01:16:35.298669880Z" level=info msg="Start cni network conf syncer for default" Mar 25 01:16:35.300628 containerd[1757]: time="2025-03-25T01:16:35.298680120Z" level=info msg="Start streaming server" Mar 25 01:16:35.300628 containerd[1757]: time="2025-03-25T01:16:35.298705680Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 25 01:16:35.300628 containerd[1757]: 
time="2025-03-25T01:16:35.298715200Z" level=info msg="runtime interface starting up..." Mar 25 01:16:35.300628 containerd[1757]: time="2025-03-25T01:16:35.298727920Z" level=info msg="starting plugins..." Mar 25 01:16:35.300628 containerd[1757]: time="2025-03-25T01:16:35.298743080Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 25 01:16:35.300628 containerd[1757]: time="2025-03-25T01:16:35.298903440Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 25 01:16:35.300628 containerd[1757]: time="2025-03-25T01:16:35.298949160Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 25 01:16:35.299326 systemd[1]: Started containerd.service - containerd container runtime. Mar 25 01:16:35.302324 containerd[1757]: time="2025-03-25T01:16:35.302285200Z" level=info msg="containerd successfully booted in 1.370997s" Mar 25 01:16:35.307892 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 25 01:16:35.316279 systemd[1]: Startup finished in 678ms (kernel) + 12.980s (initrd) + 14.359s (userspace) = 28.018s. Mar 25 01:16:35.635624 login[1902]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:16:35.637197 login[1903]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:16:35.647618 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 25 01:16:35.648679 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 25 01:16:35.650868 systemd-logind[1738]: New session 2 of user core. Mar 25 01:16:35.653864 systemd-logind[1738]: New session 1 of user core. Mar 25 01:16:35.668915 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 25 01:16:35.671486 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 25 01:16:35.681448 (systemd)[1929]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 25 01:16:35.683925 systemd-logind[1738]: New session c1 of user core. Mar 25 01:16:35.848366 systemd[1929]: Queued start job for default target default.target. Mar 25 01:16:35.855992 systemd[1929]: Created slice app.slice - User Application Slice. Mar 25 01:16:35.856194 systemd[1929]: Reached target paths.target - Paths. Mar 25 01:16:35.856305 systemd[1929]: Reached target timers.target - Timers. Mar 25 01:16:35.857570 systemd[1929]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 25 01:16:35.866950 systemd[1929]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 25 01:16:35.867100 systemd[1929]: Reached target sockets.target - Sockets. Mar 25 01:16:35.867211 systemd[1929]: Reached target basic.target - Basic System. Mar 25 01:16:35.867243 systemd[1929]: Reached target default.target - Main User Target. Mar 25 01:16:35.867268 systemd[1929]: Startup finished in 177ms. Mar 25 01:16:35.867273 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 25 01:16:35.868800 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 25 01:16:35.869404 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 25 01:16:36.077662 waagent[1895]: 2025-03-25T01:16:36.077070Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Mar 25 01:16:36.083131 waagent[1895]: 2025-03-25T01:16:36.083072Z INFO Daemon Daemon OS: flatcar 4284.0.0 Mar 25 01:16:36.087628 waagent[1895]: 2025-03-25T01:16:36.087580Z INFO Daemon Daemon Python: 3.11.11 Mar 25 01:16:36.091877 waagent[1895]: 2025-03-25T01:16:36.091834Z INFO Daemon Daemon Run daemon Mar 25 01:16:36.095957 waagent[1895]: 2025-03-25T01:16:36.095920Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4284.0.0' Mar 25 01:16:36.104593 waagent[1895]: 2025-03-25T01:16:36.104545Z INFO Daemon Daemon Using waagent for provisioning Mar 25 01:16:36.109922 waagent[1895]: 2025-03-25T01:16:36.109880Z INFO Daemon Daemon Activate resource disk Mar 25 01:16:36.114519 waagent[1895]: 2025-03-25T01:16:36.114481Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 25 01:16:36.125674 waagent[1895]: 2025-03-25T01:16:36.125627Z INFO Daemon Daemon Found device: None Mar 25 01:16:36.129985 waagent[1895]: 2025-03-25T01:16:36.129944Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 25 01:16:36.138318 waagent[1895]: 2025-03-25T01:16:36.138277Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 25 01:16:36.150201 waagent[1895]: 2025-03-25T01:16:36.150154Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 25 01:16:36.155985 waagent[1895]: 2025-03-25T01:16:36.155941Z INFO Daemon Daemon Running default provisioning handler Mar 25 01:16:36.167638 waagent[1895]: 2025-03-25T01:16:36.167076Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Mar 25 01:16:36.180556 waagent[1895]: 2025-03-25T01:16:36.180509Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 25 01:16:36.190418 waagent[1895]: 2025-03-25T01:16:36.190372Z INFO Daemon Daemon cloud-init is enabled: False Mar 25 01:16:36.195287 waagent[1895]: 2025-03-25T01:16:36.195246Z INFO Daemon Daemon Copying ovf-env.xml Mar 25 01:16:36.307654 waagent[1895]: 2025-03-25T01:16:36.305841Z INFO Daemon Daemon Successfully mounted dvd Mar 25 01:16:36.334285 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 25 01:16:36.337693 waagent[1895]: 2025-03-25T01:16:36.337628Z INFO Daemon Daemon Detect protocol endpoint Mar 25 01:16:36.342532 waagent[1895]: 2025-03-25T01:16:36.342487Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 25 01:16:36.348738 waagent[1895]: 2025-03-25T01:16:36.348699Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Mar 25 01:16:36.355189 waagent[1895]: 2025-03-25T01:16:36.355157Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 25 01:16:36.360464 waagent[1895]: 2025-03-25T01:16:36.360433Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 25 01:16:36.365361 waagent[1895]: 2025-03-25T01:16:36.365332Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 25 01:16:36.450138 waagent[1895]: 2025-03-25T01:16:36.450096Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 25 01:16:36.457142 waagent[1895]: 2025-03-25T01:16:36.457114Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 25 01:16:36.462829 waagent[1895]: 2025-03-25T01:16:36.462787Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 25 01:16:36.804683 waagent[1895]: 2025-03-25T01:16:36.804484Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 25 01:16:36.814656 waagent[1895]: 2025-03-25T01:16:36.811324Z INFO Daemon Daemon Forcing an update of the goal state. Mar 25 01:16:36.821027 waagent[1895]: 2025-03-25T01:16:36.820980Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 25 01:16:36.864542 waagent[1895]: 2025-03-25T01:16:36.864500Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164 Mar 25 01:16:36.870729 waagent[1895]: 2025-03-25T01:16:36.870675Z INFO Daemon Mar 25 01:16:36.873628 waagent[1895]: 2025-03-25T01:16:36.873582Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 2b929c39-ce7b-414e-8875-c2414b06977d eTag: 13213333295204907500 source: Fabric] Mar 25 01:16:36.885429 waagent[1895]: 2025-03-25T01:16:36.885389Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Mar 25 01:16:36.892006 waagent[1895]: 2025-03-25T01:16:36.891972Z INFO Daemon Mar 25 01:16:36.894796 waagent[1895]: 2025-03-25T01:16:36.894764Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Mar 25 01:16:36.905912 waagent[1895]: 2025-03-25T01:16:36.905879Z INFO Daemon Daemon Downloading artifacts profile blob Mar 25 01:16:36.997785 waagent[1895]: 2025-03-25T01:16:36.997700Z INFO Daemon Downloaded certificate {'thumbprint': 'CFABEE6DF053D4991E3065E7B05198D16662D210', 'hasPrivateKey': True} Mar 25 01:16:37.007903 waagent[1895]: 2025-03-25T01:16:37.007854Z INFO Daemon Downloaded certificate {'thumbprint': '390331DC9C27F8A789FCA5ED6F9EA7227A5E3AE3', 'hasPrivateKey': False} Mar 25 01:16:37.017407 waagent[1895]: 2025-03-25T01:16:37.017362Z INFO Daemon Fetch goal state completed Mar 25 01:16:37.030146 waagent[1895]: 2025-03-25T01:16:37.030088Z INFO Daemon Daemon Starting provisioning Mar 25 01:16:37.035030 waagent[1895]: 2025-03-25T01:16:37.034979Z INFO Daemon Daemon Handle ovf-env.xml. Mar 25 01:16:37.039822 waagent[1895]: 2025-03-25T01:16:37.039783Z INFO Daemon Daemon Set hostname [ci-4284.0.0-a-f1ebfb6c0b] Mar 25 01:16:37.658622 waagent[1895]: 2025-03-25T01:16:37.653592Z INFO Daemon Daemon Publish hostname [ci-4284.0.0-a-f1ebfb6c0b] Mar 25 01:16:37.660517 waagent[1895]: 2025-03-25T01:16:37.660462Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 25 01:16:37.667280 waagent[1895]: 2025-03-25T01:16:37.667236Z INFO Daemon Daemon Primary interface is [eth0] Mar 25 01:16:37.680110 systemd-networkd[1452]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 25 01:16:37.680118 systemd-networkd[1452]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
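waagent's "Test for route to 168.63.129.16" consults the kernel routing table; in /proc/net/route the destination column is little-endian hex, so the WireServer address appears as 10813FA8 (visible in the MonitorHandler route dump further below). A sketch of that lookup:

```python
import socket
import struct

WIRESERVER = "168.63.129.16"

def route_exists(dest: str, table: str = "/proc/net/route") -> bool:
    """Check /proc/net/route for an exact-destination entry.

    The kernel stores destinations as little-endian hex, so
    168.63.129.16 appears as 10813FA8.
    """
    packed = socket.inet_aton(dest)                  # big-endian bytes
    le_hex = "%08X" % struct.unpack("<I", packed)[0] # reinterpret little-endian
    with open(table) as f:
        next(f)                                      # skip the header row
        for line in f:
            if line.split()[1] == le_hex:            # Destination column
                return True
    return False

print(route_exists(WIRESERVER))
```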
Mar 25 01:16:37.680144 systemd-networkd[1452]: eth0: DHCP lease lost Mar 25 01:16:37.681625 waagent[1895]: 2025-03-25T01:16:37.681260Z INFO Daemon Daemon Create user account if not exists Mar 25 01:16:37.687151 waagent[1895]: 2025-03-25T01:16:37.687081Z INFO Daemon Daemon User core already exists, skip useradd Mar 25 01:16:37.692994 waagent[1895]: 2025-03-25T01:16:37.692954Z INFO Daemon Daemon Configure sudoer Mar 25 01:16:37.697793 waagent[1895]: 2025-03-25T01:16:37.697743Z INFO Daemon Daemon Configure sshd Mar 25 01:16:37.702286 waagent[1895]: 2025-03-25T01:16:37.702218Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Mar 25 01:16:37.715157 waagent[1895]: 2025-03-25T01:16:37.715090Z INFO Daemon Daemon Deploy ssh public key. Mar 25 01:16:37.733674 systemd-networkd[1452]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 25 01:16:38.874628 waagent[1895]: 2025-03-25T01:16:38.873727Z INFO Daemon Daemon Provisioning complete Mar 25 01:16:38.891909 waagent[1895]: 2025-03-25T01:16:38.891871Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 25 01:16:38.898104 waagent[1895]: 2025-03-25T01:16:38.898051Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Mar 25 01:16:38.907989 waagent[1895]: 2025-03-25T01:16:38.907941Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Mar 25 01:16:39.039816 waagent[1984]: 2025-03-25T01:16:39.039686Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Mar 25 01:16:40.351695 waagent[1984]: 2025-03-25T01:16:40.350930Z INFO ExtHandler ExtHandler OS: flatcar 4284.0.0 Mar 25 01:16:40.351695 waagent[1984]: 2025-03-25T01:16:40.351091Z INFO ExtHandler ExtHandler Python: 3.11.11 Mar 25 01:16:40.351695 waagent[1984]: 2025-03-25T01:16:40.351152Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Mar 25 01:16:40.718689 waagent[1984]: 2025-03-25T01:16:40.716426Z INFO ExtHandler ExtHandler Distro: flatcar-4284.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Mar 25 01:16:40.718689 waagent[1984]: 2025-03-25T01:16:40.716700Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 25 01:16:40.718689 waagent[1984]: 2025-03-25T01:16:40.716759Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 25 01:16:40.723729 waagent[1984]: 2025-03-25T01:16:40.723671Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 25 01:16:40.729222 waagent[1984]: 2025-03-25T01:16:40.729181Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 Mar 25 01:16:40.729761 waagent[1984]: 2025-03-25T01:16:40.729728Z INFO ExtHandler Mar 25 01:16:40.729834 waagent[1984]: 2025-03-25T01:16:40.729810Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: a02ebe57-554e-4d8f-900c-e64f2405140c eTag: 13213333295204907500 source: Fabric] Mar 25 01:16:40.730130 waagent[1984]: 2025-03-25T01:16:40.730080Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
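The goal state identifies each downloaded certificate by thumbprint; the 40-hex-digit form printed above (CFABEE6DF053D4991E3065E7B05198D16662D210 and its sibling) is an uppercase SHA-1 digest of the certificate's DER encoding. A sketch of recomputing one, with the PEM path purely hypothetical:

```python
import hashlib
import ssl

def thumbprint(pem_path: str) -> str:
    """Uppercase SHA-1 over the DER bytes of a PEM certificate,
    the form waagent prints in the lines above."""
    with open(pem_path) as f:
        pem = f.read()
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha1(der).hexdigest().upper()

# Usage (path is hypothetical):
# print(thumbprint("/var/lib/waagent/example-cert.pem"))
```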
Mar 25 01:16:40.730648 waagent[1984]: 2025-03-25T01:16:40.730587Z INFO ExtHandler Mar 25 01:16:40.730708 waagent[1984]: 2025-03-25T01:16:40.730685Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 25 01:16:40.734520 waagent[1984]: 2025-03-25T01:16:40.734490Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 25 01:16:40.870869 waagent[1984]: 2025-03-25T01:16:40.870791Z INFO ExtHandler Downloaded certificate {'thumbprint': 'CFABEE6DF053D4991E3065E7B05198D16662D210', 'hasPrivateKey': True} Mar 25 01:16:40.871271 waagent[1984]: 2025-03-25T01:16:40.871234Z INFO ExtHandler Downloaded certificate {'thumbprint': '390331DC9C27F8A789FCA5ED6F9EA7227A5E3AE3', 'hasPrivateKey': False} Mar 25 01:16:40.871713 waagent[1984]: 2025-03-25T01:16:40.871681Z INFO ExtHandler Fetch goal state completed Mar 25 01:16:40.887788 waagent[1984]: 2025-03-25T01:16:40.887736Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Mar 25 01:16:40.891956 waagent[1984]: 2025-03-25T01:16:40.891908Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1984 Mar 25 01:16:40.892080 waagent[1984]: 2025-03-25T01:16:40.892048Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Mar 25 01:16:40.892365 waagent[1984]: 2025-03-25T01:16:40.892333Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Mar 25 01:16:40.893751 waagent[1984]: 2025-03-25T01:16:40.893717Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4284.0.0', '', 'Flatcar Container Linux by Kinvolk'] Mar 25 01:16:40.894154 waagent[1984]: 2025-03-25T01:16:40.894096Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4284.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Mar 25 01:16:40.894268 waagent[1984]: 2025-03-25T01:16:40.894243Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Mar 25 01:16:40.894875 waagent[1984]: 2025-03-25T01:16:40.894842Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Mar 25 01:16:40.942257 waagent[1984]: 2025-03-25T01:16:40.942214Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 25 01:16:40.942453 waagent[1984]: 2025-03-25T01:16:40.942422Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 25 01:16:40.949158 waagent[1984]: 2025-03-25T01:16:40.949119Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 25 01:16:40.959826 systemd[1]: Reload requested from client PID 2004 ('systemctl') (unit waagent.service)... Mar 25 01:16:40.959840 systemd[1]: Reloading... Mar 25 01:16:41.054644 zram_generator::config[2046]: No configuration found. Mar 25 01:16:41.163407 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 25 01:16:41.261642 systemd[1]: Reloading finished in 301 ms. 
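The systemd reload above is triggered by waagent installing and enabling waagent-network-setup.service. Loosely, the enable sequence looks like this; a sketch around the real systemctl verbs, not waagent's actual implementation:

```python
import subprocess

UNIT = "waagent-network-setup.service"

def enable_unit(unit: str) -> None:
    """Reload systemd's view of unit files, then enable the unit,
    mirroring (loosely) the sequence the agent logs above."""
    subprocess.run(["systemctl", "daemon-reload"], check=True)
    subprocess.run(["systemctl", "enable", unit], check=True)
    state = subprocess.run(["systemctl", "is-enabled", unit],
                           capture_output=True, text=True)
    print(state.stdout.strip())   # "enabled" on success

# enable_unit(UNIT)  # requires root
```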
Mar 25 01:16:41.279517 waagent[1984]: 2025-03-25T01:16:41.274866Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Mar 25 01:16:41.279517 waagent[1984]: 2025-03-25T01:16:41.275007Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Mar 25 01:16:41.529575 waagent[1984]: 2025-03-25T01:16:41.529452Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Mar 25 01:16:41.529893 waagent[1984]: 2025-03-25T01:16:41.529830Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Mar 25 01:16:41.530593 waagent[1984]: 2025-03-25T01:16:41.530514Z INFO ExtHandler ExtHandler Starting env monitor service. Mar 25 01:16:41.530994 waagent[1984]: 2025-03-25T01:16:41.530907Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Mar 25 01:16:41.531632 waagent[1984]: 2025-03-25T01:16:41.531212Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 25 01:16:41.531632 waagent[1984]: 2025-03-25T01:16:41.531287Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 25 01:16:41.531632 waagent[1984]: 2025-03-25T01:16:41.531489Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Mar 25 01:16:41.531728 waagent[1984]: 2025-03-25T01:16:41.531691Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 25 01:16:41.531728 waagent[1984]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 25 01:16:41.531728 waagent[1984]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Mar 25 01:16:41.531728 waagent[1984]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 25 01:16:41.531728 waagent[1984]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 25 01:16:41.531728 waagent[1984]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 25 01:16:41.531728 waagent[1984]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 25 01:16:41.532281 waagent[1984]: 2025-03-25T01:16:41.532013Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 25 01:16:41.532281 waagent[1984]: 2025-03-25T01:16:41.532141Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Mar 25 01:16:41.532468 waagent[1984]: 2025-03-25T01:16:41.532411Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 25 01:16:41.532679 waagent[1984]: 2025-03-25T01:16:41.532636Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 25 01:16:41.532813 waagent[1984]: 2025-03-25T01:16:41.532759Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Mar 25 01:16:41.532943 waagent[1984]: 2025-03-25T01:16:41.532843Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 25 01:16:41.533911 waagent[1984]: 2025-03-25T01:16:41.533870Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 25 01:16:41.534817 waagent[1984]: 2025-03-25T01:16:41.534761Z INFO EnvHandler ExtHandler Configure routes Mar 25 01:16:41.535483 waagent[1984]: 2025-03-25T01:16:41.535305Z INFO EnvHandler ExtHandler Gateway:None Mar 25 01:16:41.535483 waagent[1984]: 2025-03-25T01:16:41.535423Z INFO EnvHandler ExtHandler Routes:None Mar 25 01:16:41.542432 waagent[1984]: 2025-03-25T01:16:41.542397Z INFO ExtHandler ExtHandler Mar 25 01:16:41.542583 waagent[1984]: 2025-03-25T01:16:41.542559Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 5f01de46-da88-4343-913f-e944151f562b correlation a858468c-4f12-4c0f-ae07-11638ec55349 created: 2025-03-25T01:15:15.894386Z] Mar 25 01:16:41.543007 waagent[1984]: 2025-03-25T01:16:41.542978Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Mar 25 01:16:41.544279 waagent[1984]: 2025-03-25T01:16:41.543638Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Mar 25 01:16:41.570550 waagent[1984]: 2025-03-25T01:16:41.570494Z INFO MonitorHandler ExtHandler Network interfaces: Mar 25 01:16:41.570550 waagent[1984]: Executing ['ip', '-a', '-o', 'link']: Mar 25 01:16:41.570550 waagent[1984]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 25 01:16:41.570550 waagent[1984]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:06:cd:00 brd ff:ff:ff:ff:ff:ff Mar 25 01:16:41.570550 waagent[1984]: 3: enP27637s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:06:cd:00 brd ff:ff:ff:ff:ff:ff\ altname enP27637p0s2 Mar 25 01:16:41.570550 waagent[1984]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 25 01:16:41.570550 waagent[1984]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 25 01:16:41.570550 waagent[1984]: 2: eth0 inet 10.200.20.40/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 25 01:16:41.570550 waagent[1984]: Executing ['ip', '-6', '-a', '-o', 'address']: Mar 25 01:16:41.570550 waagent[1984]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Mar 25 01:16:41.570550 waagent[1984]: 2: eth0 inet6 fe80::20d:3aff:fe06:cd00/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 25 01:16:41.570550 waagent[1984]: 3: enP27637s1 inet6 fe80::20d:3aff:fe06:cd00/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 25 01:16:41.586159 waagent[1984]: 2025-03-25T01:16:41.586115Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: C84651FF-26CD-4FC4-AF24-C344B1A8F027;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Mar 25 01:16:41.649641 waagent[1984]: 2025-03-25T01:16:41.649481Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Mar 25 01:16:41.649641 waagent[1984]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 25 01:16:41.649641 waagent[1984]: pkts bytes target prot opt in out source destination 
Mar 25 01:16:41.649641 waagent[1984]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 25 01:16:41.649641 waagent[1984]: pkts bytes target prot opt in out source destination Mar 25 01:16:41.649641 waagent[1984]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 25 01:16:41.649641 waagent[1984]: pkts bytes target prot opt in out source destination Mar 25 01:16:41.649641 waagent[1984]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 25 01:16:41.649641 waagent[1984]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 25 01:16:41.649641 waagent[1984]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 25 01:16:41.652269 waagent[1984]: 2025-03-25T01:16:41.652220Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 25 01:16:41.652269 waagent[1984]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 25 01:16:41.652269 waagent[1984]: pkts bytes target prot opt in out source destination Mar 25 01:16:41.652269 waagent[1984]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 25 01:16:41.652269 waagent[1984]: pkts bytes target prot opt in out source destination Mar 25 01:16:41.652269 waagent[1984]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 25 01:16:41.652269 waagent[1984]: pkts bytes target prot opt in out source destination Mar 25 01:16:41.652269 waagent[1984]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 25 01:16:41.652269 waagent[1984]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 25 01:16:41.652269 waagent[1984]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 25 01:16:41.652480 waagent[1984]: 2025-03-25T01:16:41.652450Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 25 01:16:43.892710 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 25 01:16:43.894126 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:16:45.764865 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:16:45.774916 (kubelet)[2138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:16:45.810718 kubelet[2138]: E0325 01:16:45.810662 2138 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:16:45.814392 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:16:45.814893 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 01:16:45.816282 systemd[1]: kubelet.service: Consumed 136ms CPU time, 100.9M memory peak. Mar 25 01:16:55.717296 chronyd[1736]: Selected source PHC0 Mar 25 01:16:56.065100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 25 01:16:56.066687 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:16:56.308737 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 25 01:16:56.318888 (kubelet)[2153]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:16:56.357380 kubelet[2153]: E0325 01:16:56.357327 2153 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:16:56.359822 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:16:56.360076 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 01:16:56.360576 systemd[1]: kubelet.service: Consumed 135ms CPU time, 102.4M memory peak. Mar 25 01:17:02.890542 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 25 01:17:02.891997 systemd[1]: Started sshd@0-10.200.20.40:22-10.200.16.10:50524.service - OpenSSH per-connection server daemon (10.200.16.10:50524). Mar 25 01:17:03.514855 sshd[2161]: Accepted publickey for core from 10.200.16.10 port 50524 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:17:03.516233 sshd-session[2161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:17:03.520258 systemd-logind[1738]: New session 3 of user core. Mar 25 01:17:03.527773 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 25 01:17:03.953300 systemd[1]: Started sshd@1-10.200.20.40:22-10.200.16.10:50536.service - OpenSSH per-connection server daemon (10.200.16.10:50536). Mar 25 01:17:04.416177 sshd[2166]: Accepted publickey for core from 10.200.16.10 port 50536 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:17:04.417542 sshd-session[2166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:17:04.422102 systemd-logind[1738]: New session 4 of user core. Mar 25 01:17:04.427779 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 25 01:17:04.738719 sshd[2168]: Connection closed by 10.200.16.10 port 50536 Mar 25 01:17:04.739352 sshd-session[2166]: pam_unix(sshd:session): session closed for user core Mar 25 01:17:04.742953 systemd-logind[1738]: Session 4 logged out. Waiting for processes to exit. Mar 25 01:17:04.743106 systemd[1]: sshd@1-10.200.20.40:22-10.200.16.10:50536.service: Deactivated successfully. Mar 25 01:17:04.745283 systemd[1]: session-4.scope: Deactivated successfully. Mar 25 01:17:04.746802 systemd-logind[1738]: Removed session 4. Mar 25 01:17:04.827868 systemd[1]: Started sshd@2-10.200.20.40:22-10.200.16.10:50552.service - OpenSSH per-connection server daemon (10.200.16.10:50552). Mar 25 01:17:05.321011 sshd[2174]: Accepted publickey for core from 10.200.16.10 port 50552 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:17:05.322438 sshd-session[2174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:17:05.328126 systemd-logind[1738]: New session 5 of user core. Mar 25 01:17:05.333765 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 25 01:17:05.675201 sshd[2176]: Connection closed by 10.200.16.10 port 50552 Mar 25 01:17:05.674623 sshd-session[2174]: pam_unix(sshd:session): session closed for user core Mar 25 01:17:05.678153 systemd-logind[1738]: Session 5 logged out. Waiting for processes to exit. 
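The recurring kubelet failures in this stretch all have the same cause: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-managed node that file is written by `kubeadm init` or `kubeadm join`, so the unit crash-loops (the restart counter climbs through this log) until bootstrap runs. A trivial pre-flight check expressing the same condition:

```python
import os
import sys

CONFIG = "/var/lib/kubelet/config.yaml"

def main() -> int:
    # kubeadm init/join writes this file; until then the kubelet exits
    # with the "no such file or directory" error seen above.
    if not os.path.exists(CONFIG):
        print(f"{CONFIG} missing: node not bootstrapped yet", file=sys.stderr)
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```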
Mar 25 01:17:05.679109 systemd[1]: sshd@2-10.200.20.40:22-10.200.16.10:50552.service: Deactivated successfully. Mar 25 01:17:05.680643 systemd[1]: session-5.scope: Deactivated successfully. Mar 25 01:17:05.681465 systemd-logind[1738]: Removed session 5. Mar 25 01:17:05.760976 systemd[1]: Started sshd@3-10.200.20.40:22-10.200.16.10:50566.service - OpenSSH per-connection server daemon (10.200.16.10:50566). Mar 25 01:17:06.249582 sshd[2182]: Accepted publickey for core from 10.200.16.10 port 50566 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:17:06.251156 sshd-session[2182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:17:06.256477 systemd-logind[1738]: New session 6 of user core. Mar 25 01:17:06.264792 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 25 01:17:06.522165 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 25 01:17:06.525772 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:17:06.612449 sshd[2184]: Connection closed by 10.200.16.10 port 50566 Mar 25 01:17:06.612981 sshd-session[2182]: pam_unix(sshd:session): session closed for user core Mar 25 01:17:06.616228 systemd-logind[1738]: Session 6 logged out. Waiting for processes to exit. Mar 25 01:17:06.618739 systemd[1]: sshd@3-10.200.20.40:22-10.200.16.10:50566.service: Deactivated successfully. Mar 25 01:17:06.622067 systemd[1]: session-6.scope: Deactivated successfully. Mar 25 01:17:06.625193 systemd-logind[1738]: Removed session 6. Mar 25 01:17:06.645405 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:17:06.652218 (kubelet)[2197]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:17:06.692846 systemd[1]: Started sshd@4-10.200.20.40:22-10.200.16.10:50572.service - OpenSSH per-connection server daemon (10.200.16.10:50572). Mar 25 01:17:06.729548 kubelet[2197]: E0325 01:17:06.729508 2197 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:17:06.731803 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:17:06.732034 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 01:17:06.732574 systemd[1]: kubelet.service: Consumed 138ms CPU time, 101.1M memory peak. Mar 25 01:17:07.154958 sshd[2203]: Accepted publickey for core from 10.200.16.10 port 50572 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:17:07.156285 sshd-session[2203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:17:07.161868 systemd-logind[1738]: New session 7 of user core. Mar 25 01:17:07.170741 systemd[1]: Started session-7.scope - Session 7 of User core. 
Mar 25 01:17:07.638723 sudo[2208]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 25 01:17:07.638989 sudo[2208]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:17:07.669410 sudo[2208]: pam_unix(sudo:session): session closed for user root Mar 25 01:17:07.753304 sshd[2207]: Connection closed by 10.200.16.10 port 50572 Mar 25 01:17:07.753999 sshd-session[2203]: pam_unix(sshd:session): session closed for user core Mar 25 01:17:07.757773 systemd[1]: sshd@4-10.200.20.40:22-10.200.16.10:50572.service: Deactivated successfully. Mar 25 01:17:07.759511 systemd[1]: session-7.scope: Deactivated successfully. Mar 25 01:17:07.761236 systemd-logind[1738]: Session 7 logged out. Waiting for processes to exit. Mar 25 01:17:07.762368 systemd-logind[1738]: Removed session 7. Mar 25 01:17:07.840149 systemd[1]: Started sshd@5-10.200.20.40:22-10.200.16.10:50576.service - OpenSSH per-connection server daemon (10.200.16.10:50576). Mar 25 01:17:08.295697 sshd[2214]: Accepted publickey for core from 10.200.16.10 port 50576 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:17:08.297138 sshd-session[2214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:17:08.302718 systemd-logind[1738]: New session 8 of user core. Mar 25 01:17:08.309817 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 25 01:17:08.549270 sudo[2218]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 25 01:17:08.549587 sudo[2218]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:17:08.552885 sudo[2218]: pam_unix(sudo:session): session closed for user root Mar 25 01:17:08.557240 sudo[2217]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 25 01:17:08.557494 sudo[2217]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:17:08.565646 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 25 01:17:08.600258 augenrules[2240]: No rules Mar 25 01:17:08.601449 systemd[1]: audit-rules.service: Deactivated successfully. Mar 25 01:17:08.601713 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 25 01:17:08.604257 sudo[2217]: pam_unix(sudo:session): session closed for user root Mar 25 01:17:08.688215 sshd[2216]: Connection closed by 10.200.16.10 port 50576 Mar 25 01:17:08.689809 sshd-session[2214]: pam_unix(sshd:session): session closed for user core Mar 25 01:17:08.692405 systemd-logind[1738]: Session 8 logged out. Waiting for processes to exit. Mar 25 01:17:08.692676 systemd[1]: sshd@5-10.200.20.40:22-10.200.16.10:50576.service: Deactivated successfully. Mar 25 01:17:08.694248 systemd[1]: session-8.scope: Deactivated successfully. Mar 25 01:17:08.697183 systemd-logind[1738]: Removed session 8. Mar 25 01:17:08.774731 systemd[1]: Started sshd@6-10.200.20.40:22-10.200.16.10:50578.service - OpenSSH per-connection server daemon (10.200.16.10:50578). Mar 25 01:17:09.264579 sshd[2249]: Accepted publickey for core from 10.200.16.10 port 50578 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:17:09.265801 sshd-session[2249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:17:09.270657 systemd-logind[1738]: New session 9 of user core. Mar 25 01:17:09.275735 systemd[1]: Started session-9.scope - Session 9 of User core. 
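The sudo entries above follow a fixed "user : PWD=... ; USER=... ; COMMAND=..." layout, which makes them easy to mine from the journal. A sketch of extracting the fields with a regex, the pattern inferred only from the lines shown here:

```python
import re

# One of the sudo log payloads from above, verbatim.
LINE = ("core : PWD=/home/core ; USER=root ; "
        "COMMAND=/usr/sbin/setenforce 1")

PATTERN = re.compile(
    r"^(?P<user>\S+) : PWD=(?P<pwd>\S+) ; "
    r"USER=(?P<runas>\S+) ; COMMAND=(?P<cmd>.+)$"
)

m = PATTERN.match(LINE)
if m:
    print(m.group("user"), "->", m.group("runas"), ":", m.group("cmd"))
    # core -> root : /usr/sbin/setenforce 1
```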
Mar 25 01:17:09.535391 sudo[2252]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 25 01:17:09.535667 sudo[2252]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:17:10.581995 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 25 01:17:10.593857 (dockerd)[2271]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 25 01:17:11.394681 dockerd[2271]: time="2025-03-25T01:17:11.394275175Z" level=info msg="Starting up" Mar 25 01:17:11.396169 dockerd[2271]: time="2025-03-25T01:17:11.396133334Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 25 01:17:11.476101 dockerd[2271]: time="2025-03-25T01:17:11.475983781Z" level=info msg="Loading containers: start." Mar 25 01:17:11.679167 kernel: Initializing XFRM netlink socket Mar 25 01:17:11.778185 systemd-networkd[1452]: docker0: Link UP Mar 25 01:17:11.854920 dockerd[2271]: time="2025-03-25T01:17:11.854862379Z" level=info msg="Loading containers: done." Mar 25 01:17:11.907076 dockerd[2271]: time="2025-03-25T01:17:11.907024797Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 25 01:17:11.907253 dockerd[2271]: time="2025-03-25T01:17:11.907130557Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 Mar 25 01:17:11.907281 dockerd[2271]: time="2025-03-25T01:17:11.907259237Z" level=info msg="Daemon has completed initialization" Mar 25 01:17:11.967037 dockerd[2271]: time="2025-03-25T01:17:11.966886691Z" level=info msg="API listen on /run/docker.sock" Mar 25 01:17:11.967420 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 25 01:17:12.640545 containerd[1757]: time="2025-03-25T01:17:12.640489240Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\"" Mar 25 01:17:13.511759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount46089482.mount: Deactivated successfully. 
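dockerd announces "API listen on /run/docker.sock"; the Engine API is plain HTTP over that UNIX socket, so the standard /version endpoint can be queried with nothing but the standard library:

```python
import json
import socket

SOCK = "/run/docker.sock"

def docker_version() -> dict:
    """GET /version over the Docker UNIX socket using raw HTTP/1.0,
    which lets us read the body until the server closes the connection."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(SOCK)
    s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
    raw = b""
    while chunk := s.recv(4096):
        raw += chunk
    s.close()
    _headers, _, body = raw.partition(b"\r\n\r\n")
    return json.loads(body)

# print(docker_version()["Version"])   # e.g. "27.4.1", per the log above
```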
Mar 25 01:17:14.976938 containerd[1757]: time="2025-03-25T01:17:14.976884990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:14.979384 containerd[1757]: time="2025-03-25T01:17:14.979333949Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.3: active requests=0, bytes read=26231950" Mar 25 01:17:14.983504 containerd[1757]: time="2025-03-25T01:17:14.983475107Z" level=info msg="ImageCreate event name:\"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:14.990881 containerd[1757]: time="2025-03-25T01:17:14.990808584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:14.991761 containerd[1757]: time="2025-03-25T01:17:14.991583784Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.3\" with image id \"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\", size \"26228750\" in 2.351036784s" Mar 25 01:17:14.991761 containerd[1757]: time="2025-03-25T01:17:14.991637024Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\" returns image reference \"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\"" Mar 25 01:17:14.992510 containerd[1757]: time="2025-03-25T01:17:14.992245383Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\"" Mar 25 01:17:16.240634 kernel: hv_balloon: Max. 
dynamic memory size: 4096 MB Mar 25 01:17:16.465184 containerd[1757]: time="2025-03-25T01:17:16.465129067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:16.468937 containerd[1757]: time="2025-03-25T01:17:16.468614786Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.3: active requests=0, bytes read=22530032" Mar 25 01:17:16.473835 containerd[1757]: time="2025-03-25T01:17:16.473787103Z" level=info msg="ImageCreate event name:\"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:16.480347 containerd[1757]: time="2025-03-25T01:17:16.480288660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:16.481506 containerd[1757]: time="2025-03-25T01:17:16.481131740Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.3\" with image id \"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\", size \"23970828\" in 1.488852637s" Mar 25 01:17:16.481506 containerd[1757]: time="2025-03-25T01:17:16.481168140Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\" returns image reference \"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\"" Mar 25 01:17:16.481749 containerd[1757]: time="2025-03-25T01:17:16.481727500Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\"" Mar 25 01:17:16.820732 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 25 01:17:16.822182 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:17:16.946156 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:17:16.954951 (kubelet)[2534]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:17:17.016966 kubelet[2534]: E0325 01:17:17.016824 2534 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:17:17.019321 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:17:17.019476 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 01:17:17.019878 systemd[1]: kubelet.service: Consumed 164ms CPU time, 104.4M memory peak. Mar 25 01:17:17.341919 update_engine[1740]: I20250325 01:17:17.303443 1740 update_attempter.cc:509] Updating boot flags... 
Mar 25 01:17:17.390783 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2556) Mar 25 01:17:17.507692 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2557) Mar 25 01:17:18.380746 containerd[1757]: time="2025-03-25T01:17:18.380691039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:18.384211 containerd[1757]: time="2025-03-25T01:17:18.383937758Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.3: active requests=0, bytes read=17482561" Mar 25 01:17:18.387405 containerd[1757]: time="2025-03-25T01:17:18.387351316Z" level=info msg="ImageCreate event name:\"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:18.392614 containerd[1757]: time="2025-03-25T01:17:18.392554154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:18.393557 containerd[1757]: time="2025-03-25T01:17:18.393429874Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.3\" with image id \"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\", size \"18923375\" in 1.911611854s" Mar 25 01:17:18.393557 containerd[1757]: time="2025-03-25T01:17:18.393464634Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\" returns image reference \"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\"" Mar 25 01:17:18.394027 containerd[1757]: time="2025-03-25T01:17:18.393864794Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\"" Mar 25 01:17:19.554798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2846201521.mount: Deactivated successfully. 
Mar 25 01:17:19.910637 containerd[1757]: time="2025-03-25T01:17:19.910445540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:19.914109 containerd[1757]: time="2025-03-25T01:17:19.913913298Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.3: active requests=0, bytes read=27370095" Mar 25 01:17:19.917412 containerd[1757]: time="2025-03-25T01:17:19.917359217Z" level=info msg="ImageCreate event name:\"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:19.921994 containerd[1757]: time="2025-03-25T01:17:19.921934455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:19.922652 containerd[1757]: time="2025-03-25T01:17:19.922470215Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.3\" with image id \"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\", repo tag \"registry.k8s.io/kube-proxy:v1.32.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\", size \"27369114\" in 1.528575381s" Mar 25 01:17:19.922652 containerd[1757]: time="2025-03-25T01:17:19.922506375Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\" returns image reference \"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\"" Mar 25 01:17:19.923001 containerd[1757]: time="2025-03-25T01:17:19.922966174Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Mar 25 01:17:20.635394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3665313040.mount: Deactivated successfully. 
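containerd reports both the byte count and the wall-clock time for each pull, so throughput is easy to derive. A small sketch using the kube-proxy figures above (size "27369114" in 1.528575381s, roughly 17 MiB/s); substitute the numbers from any other pull in this log:

    // Sketch: derive pull throughput from the containerd log fields above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const bytesRead = 27369114.0 // size reported for kube-proxy:v1.32.3
        dur, err := time.ParseDuration("1.528575381s")
        if err != nil {
            panic(err)
        }
        mib := bytesRead / (1024 * 1024)
        fmt.Printf("%.1f MiB in %s -> %.1f MiB/s\n", mib, dur, mib/dur.Seconds())
    }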
Mar 25 01:17:21.706409 containerd[1757]: time="2025-03-25T01:17:21.706363060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:21.709165 containerd[1757]: time="2025-03-25T01:17:21.709085939Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Mar 25 01:17:21.711893 containerd[1757]: time="2025-03-25T01:17:21.711822897Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:21.718654 containerd[1757]: time="2025-03-25T01:17:21.718572814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:21.719621 containerd[1757]: time="2025-03-25T01:17:21.719477094Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.79647344s" Mar 25 01:17:21.719621 containerd[1757]: time="2025-03-25T01:17:21.719512974Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Mar 25 01:17:21.720325 containerd[1757]: time="2025-03-25T01:17:21.720094014Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 25 01:17:22.262191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2799579190.mount: Deactivated successfully. 
Mar 25 01:17:22.287635 containerd[1757]: time="2025-03-25T01:17:22.287518014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 01:17:22.292309 containerd[1757]: time="2025-03-25T01:17:22.292247852Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Mar 25 01:17:22.296629 containerd[1757]: time="2025-03-25T01:17:22.296549130Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 01:17:22.306764 containerd[1757]: time="2025-03-25T01:17:22.306702086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 01:17:22.307424 containerd[1757]: time="2025-03-25T01:17:22.307077245Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 586.952351ms" Mar 25 01:17:22.307424 containerd[1757]: time="2025-03-25T01:17:22.307123005Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Mar 25 01:17:22.307648 containerd[1757]: time="2025-03-25T01:17:22.307581245Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Mar 25 01:17:23.028240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3505828955.mount: Deactivated successfully. Mar 25 01:17:27.070743 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Mar 25 01:17:27.072742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:17:27.182740 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:17:27.192872 (kubelet)[2736]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:17:27.230757 kubelet[2736]: E0325 01:17:27.230632 2736 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:17:27.232348 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:17:27.232494 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 01:17:27.232900 systemd[1]: kubelet.service: Consumed 133ms CPU time, 99.4M memory peak. Mar 25 01:17:37.321026 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Mar 25 01:17:37.322634 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:17:38.862200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 25 01:17:38.871912 (kubelet)[2763]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:17:38.909083 kubelet[2763]: E0325 01:17:38.909027 2763 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:17:38.911924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:17:38.912058 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 01:17:38.913684 systemd[1]: kubelet.service: Consumed 134ms CPU time, 104.4M memory peak. Mar 25 01:17:43.000656 containerd[1757]: time="2025-03-25T01:17:42.999703729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:43.042315 containerd[1757]: time="2025-03-25T01:17:43.042235751Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812429" Mar 25 01:17:43.045376 containerd[1757]: time="2025-03-25T01:17:43.045320309Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:43.105432 containerd[1757]: time="2025-03-25T01:17:43.105370124Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:43.106539 containerd[1757]: time="2025-03-25T01:17:43.106401884Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 20.798767639s" Mar 25 01:17:43.106539 containerd[1757]: time="2025-03-25T01:17:43.106439364Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Mar 25 01:17:47.244672 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:17:47.244814 systemd[1]: kubelet.service: Consumed 134ms CPU time, 104.4M memory peak. Mar 25 01:17:47.247076 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:17:47.279158 systemd[1]: Reload requested from client PID 2835 ('systemctl') (unit session-9.scope)... Mar 25 01:17:47.279299 systemd[1]: Reloading... Mar 25 01:17:47.394638 zram_generator::config[2879]: No configuration found. Mar 25 01:17:47.498736 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 25 01:17:47.600488 systemd[1]: Reloading finished in 320 ms. Mar 25 01:17:47.651778 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:17:47.655122 systemd[1]: kubelet.service: Deactivated successfully. Mar 25 01:17:47.656644 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 25 01:17:47.656693 systemd[1]: kubelet.service: Consumed 97ms CPU time, 90.2M memory peak. Mar 25 01:17:47.658093 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:17:51.430530 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:17:51.440869 (kubelet)[2951]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 25 01:17:51.479118 kubelet[2951]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 25 01:17:51.479118 kubelet[2951]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 25 01:17:51.479118 kubelet[2951]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 25 01:17:51.480848 kubelet[2951]: I0325 01:17:51.479237 2951 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 25 01:17:52.014797 kubelet[2951]: I0325 01:17:52.014748 2951 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 25 01:17:52.014797 kubelet[2951]: I0325 01:17:52.014781 2951 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 25 01:17:52.015223 kubelet[2951]: I0325 01:17:52.015195 2951 server.go:954] "Client rotation is on, will bootstrap in background" Mar 25 01:17:52.032390 kubelet[2951]: E0325 01:17:52.032354 2951 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:17:52.033906 kubelet[2951]: I0325 01:17:52.033886 2951 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 25 01:17:52.040825 kubelet[2951]: I0325 01:17:52.040799 2951 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 25 01:17:52.043759 kubelet[2951]: I0325 01:17:52.043737 2951 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 25 01:17:52.044568 kubelet[2951]: I0325 01:17:52.044532 2951 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 25 01:17:52.044751 kubelet[2951]: I0325 01:17:52.044569 2951 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284.0.0-a-f1ebfb6c0b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 25 01:17:52.044839 kubelet[2951]: I0325 01:17:52.044760 2951 topology_manager.go:138] "Creating topology manager with none policy" Mar 25 01:17:52.044839 kubelet[2951]: I0325 01:17:52.044770 2951 container_manager_linux.go:304] "Creating device plugin manager" Mar 25 01:17:52.044922 kubelet[2951]: I0325 01:17:52.044901 2951 state_mem.go:36] "Initialized new in-memory state store" Mar 25 01:17:52.047689 kubelet[2951]: I0325 01:17:52.047671 2951 kubelet.go:446] "Attempting to sync node with API server" Mar 25 01:17:52.047736 kubelet[2951]: I0325 01:17:52.047696 2951 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 25 01:17:52.047736 kubelet[2951]: I0325 01:17:52.047717 2951 kubelet.go:352] "Adding apiserver pod source" Mar 25 01:17:52.047736 kubelet[2951]: I0325 01:17:52.047731 2951 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 25 01:17:52.053768 kubelet[2951]: W0325 01:17:52.053244 2951 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Mar 25 01:17:52.053768 kubelet[2951]: E0325 01:17:52.053295 2951 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:17:52.053768 kubelet[2951]: W0325 
01:17:52.053358 2951 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-a-f1ebfb6c0b&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Mar 25 01:17:52.053768 kubelet[2951]: E0325 01:17:52.053380 2951 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-a-f1ebfb6c0b&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:17:52.053768 kubelet[2951]: I0325 01:17:52.053457 2951 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 25 01:17:52.053949 kubelet[2951]: I0325 01:17:52.053919 2951 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 25 01:17:52.054009 kubelet[2951]: W0325 01:17:52.053970 2951 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 25 01:17:52.054717 kubelet[2951]: I0325 01:17:52.054698 2951 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 25 01:17:52.054771 kubelet[2951]: I0325 01:17:52.054729 2951 server.go:1287] "Started kubelet" Mar 25 01:17:52.055938 kubelet[2951]: I0325 01:17:52.055901 2951 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 25 01:17:52.060537 kubelet[2951]: I0325 01:17:52.060079 2951 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 25 01:17:52.060537 kubelet[2951]: I0325 01:17:52.060365 2951 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 25 01:17:52.060785 kubelet[2951]: I0325 01:17:52.060762 2951 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 25 01:17:52.062617 kubelet[2951]: E0325 01:17:52.062269 2951 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.40:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.40:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4284.0.0-a-f1ebfb6c0b.182fe6e9a91d5d49 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284.0.0-a-f1ebfb6c0b,UID:ci-4284.0.0-a-f1ebfb6c0b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284.0.0-a-f1ebfb6c0b,},FirstTimestamp:2025-03-25 01:17:52.054713673 +0000 UTC m=+0.609350127,LastTimestamp:2025-03-25 01:17:52.054713673 +0000 UTC m=+0.609350127,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284.0.0-a-f1ebfb6c0b,}" Mar 25 01:17:52.064352 kubelet[2951]: I0325 01:17:52.064268 2951 server.go:490] "Adding debug handlers to kubelet server" Mar 25 01:17:52.065761 kubelet[2951]: I0325 01:17:52.065738 2951 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 25 01:17:52.066390 kubelet[2951]: I0325 01:17:52.066375 2951 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 25 01:17:52.068593 kubelet[2951]: I0325 01:17:52.066509 2951 
desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 25 01:17:52.068593 kubelet[2951]: E0325 01:17:52.066676 2951 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" Mar 25 01:17:52.068710 kubelet[2951]: I0325 01:17:52.068658 2951 reconciler.go:26] "Reconciler: start to sync state" Mar 25 01:17:52.069045 kubelet[2951]: W0325 01:17:52.069008 2951 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Mar 25 01:17:52.069150 kubelet[2951]: E0325 01:17:52.069124 2951 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:17:52.069273 kubelet[2951]: E0325 01:17:52.069253 2951 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-a-f1ebfb6c0b?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="200ms" Mar 25 01:17:52.070808 kubelet[2951]: I0325 01:17:52.070787 2951 factory.go:221] Registration of the systemd container factory successfully Mar 25 01:17:52.070964 kubelet[2951]: I0325 01:17:52.070947 2951 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 25 01:17:52.071974 kubelet[2951]: E0325 01:17:52.071956 2951 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 25 01:17:52.072267 kubelet[2951]: I0325 01:17:52.072255 2951 factory.go:221] Registration of the containerd container factory successfully Mar 25 01:17:52.079438 kubelet[2951]: I0325 01:17:52.079390 2951 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 25 01:17:52.080279 kubelet[2951]: I0325 01:17:52.080250 2951 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 25 01:17:52.080279 kubelet[2951]: I0325 01:17:52.080270 2951 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 25 01:17:52.080351 kubelet[2951]: I0325 01:17:52.080291 2951 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
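Every reflector and lease call from this kubelet (2951) fails with "dial tcp 10.200.20.40:6443: connect: connection refused" because the kube-apiserver it is about to launch as a static pod is not running yet; these errors are expected during control-plane bootstrap and stop once the apiserver sandbox starts (see the RunPodSandbox lines near the end of this log). A throwaway Go probe of the same endpoint, assuming it runs on or near the node; TLS verification is skipped only because this is a liveness poke, not real tooling:

    // Sketch: poll the apiserver /healthz endpoint from the logs until the
    // static pod comes up. Do not skip TLS verification in production code.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for {
            resp, err := client.Get("https://10.200.20.40:6443/healthz")
            if err == nil {
                fmt.Println("apiserver answered:", resp.Status)
                resp.Body.Close()
                return
            }
            fmt.Println("still down:", err) // matches the "connection refused" above
            time.Sleep(time.Second)
        }
    }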
Mar 25 01:17:52.080351 kubelet[2951]: I0325 01:17:52.080298 2951 kubelet.go:2388] "Starting kubelet main sync loop" Mar 25 01:17:52.080351 kubelet[2951]: E0325 01:17:52.080330 2951 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 25 01:17:52.086556 kubelet[2951]: W0325 01:17:52.086467 2951 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Mar 25 01:17:52.086556 kubelet[2951]: E0325 01:17:52.086550 2951 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:17:52.101631 kubelet[2951]: I0325 01:17:52.101575 2951 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 25 01:17:52.102003 kubelet[2951]: I0325 01:17:52.101804 2951 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 25 01:17:52.102003 kubelet[2951]: I0325 01:17:52.101847 2951 state_mem.go:36] "Initialized new in-memory state store" Mar 25 01:17:52.169048 kubelet[2951]: E0325 01:17:52.169007 2951 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" Mar 25 01:17:52.181115 kubelet[2951]: E0325 01:17:52.181091 2951 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 25 01:17:52.269728 kubelet[2951]: E0325 01:17:52.269427 2951 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" Mar 25 01:17:52.270330 kubelet[2951]: E0325 01:17:52.269909 2951 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-a-f1ebfb6c0b?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="400ms" Mar 25 01:17:52.370250 kubelet[2951]: E0325 01:17:52.370209 2951 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" Mar 25 01:17:52.381543 kubelet[2951]: E0325 01:17:52.381524 2951 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 25 01:17:52.470973 kubelet[2951]: E0325 01:17:52.470934 2951 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" Mar 25 01:17:52.571644 kubelet[2951]: E0325 01:17:52.571617 2951 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" Mar 25 01:17:52.670358 kubelet[2951]: E0325 01:17:52.670322 2951 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-a-f1ebfb6c0b?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="800ms" Mar 25 01:17:52.672682 kubelet[2951]: E0325 01:17:52.672647 2951 kubelet_node_status.go:467] "Error getting the current node from lister" err="node 
\"ci-4284.0.0-a-f1ebfb6c0b\" not found" Mar 25 01:17:52.773092 kubelet[2951]: E0325 01:17:52.773060 2951 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" Mar 25 01:17:52.782258 kubelet[2951]: E0325 01:17:52.782235 2951 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 25 01:17:52.873855 kubelet[2951]: E0325 01:17:52.873769 2951 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" Mar 25 01:17:52.954304 kubelet[2951]: W0325 01:17:52.954250 2951 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-a-f1ebfb6c0b&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Mar 25 01:17:52.954363 kubelet[2951]: E0325 01:17:52.954316 2951 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-a-f1ebfb6c0b&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:17:52.963830 kubelet[2951]: W0325 01:17:52.963807 2951 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Mar 25 01:17:52.963886 kubelet[2951]: E0325 01:17:52.963841 2951 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:17:52.974482 kubelet[2951]: E0325 01:17:52.974462 2951 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" Mar 25 01:17:53.074741 kubelet[2951]: E0325 01:17:53.074711 2951 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" Mar 25 01:17:53.175138 kubelet[2951]: E0325 01:17:53.175023 2951 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" Mar 25 01:17:53.275623 kubelet[2951]: E0325 01:17:53.275491 2951 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" Mar 25 01:17:53.276130 kubelet[2951]: W0325 01:17:53.276071 2951 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Mar 25 01:17:53.276130 kubelet[2951]: E0325 01:17:53.276106 2951 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:17:53.283778 kubelet[2951]: W0325 01:17:53.283701 
2951 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Mar 25 01:17:53.283778 kubelet[2951]: E0325 01:17:53.283755 2951 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:17:54.802558 kubelet[2951]: E0325 01:17:53.376338 2951 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" Mar 25 01:17:54.802558 kubelet[2951]: E0325 01:17:53.471008 2951 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-a-f1ebfb6c0b?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="1.6s" Mar 25 01:17:54.802558 kubelet[2951]: E0325 01:17:53.477184 2951 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" Mar 25 01:17:54.802558 kubelet[2951]: E0325 01:17:53.577952 2951 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" Mar 25 01:17:54.802558 kubelet[2951]: E0325 01:17:53.583096 2951 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 25 01:17:54.802558 kubelet[2951]: E0325 01:17:53.678559 2951 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" Mar 25 01:17:54.802558 kubelet[2951]: E0325 01:17:53.778947 2951 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" Mar 25 01:17:54.802558 kubelet[2951]: E0325 01:17:53.879455 2951 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" Mar 25 01:17:54.802558 kubelet[2951]: E0325 01:17:53.979866 2951 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" Mar 25 01:17:54.802558 kubelet[2951]: E0325 01:17:54.080155 2951 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" Mar 25 01:17:54.802805 kubelet[2951]: E0325 01:17:54.173480 2951 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:17:54.802805 kubelet[2951]: E0325 01:17:54.180728 2951 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" Mar 25 01:17:54.802805 kubelet[2951]: E0325 01:17:54.281103 2951 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" Mar 25 01:17:54.802805 kubelet[2951]: 
E0325 01:17:54.381562 2951 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" Mar 25 01:17:54.802805 kubelet[2951]: E0325 01:17:54.482005 2951 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" Mar 25 01:17:54.802805 kubelet[2951]: E0325 01:17:54.582768 2951 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" Mar 25 01:17:54.802805 kubelet[2951]: E0325 01:17:54.683178 2951 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" Mar 25 01:17:54.802805 kubelet[2951]: E0325 01:17:54.783632 2951 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" Mar 25 01:17:54.884109 kubelet[2951]: E0325 01:17:54.884074 2951 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" Mar 25 01:17:54.984583 kubelet[2951]: E0325 01:17:54.984555 2951 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" Mar 25 01:17:55.150153 kubelet[2951]: E0325 01:17:55.072000 2951 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-a-f1ebfb6c0b?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="3.2s" Mar 25 01:17:55.150153 kubelet[2951]: E0325 01:17:55.085091 2951 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" Mar 25 01:17:55.154691 kubelet[2951]: I0325 01:17:55.154654 2951 policy_none.go:49] "None policy: Start" Mar 25 01:17:55.154691 kubelet[2951]: I0325 01:17:55.154684 2951 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 25 01:17:55.154691 kubelet[2951]: I0325 01:17:55.154697 2951 state_mem.go:35] "Initializing new in-memory state store" Mar 25 01:17:55.166511 kubelet[2951]: W0325 01:17:55.166447 2951 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Mar 25 01:17:55.166511 kubelet[2951]: E0325 01:17:55.166486 2951 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:17:55.172037 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 25 01:17:55.183358 kubelet[2951]: E0325 01:17:55.183325 2951 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 25 01:17:55.185326 kubelet[2951]: E0325 01:17:55.185301 2951 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" Mar 25 01:17:55.185753 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
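Note the interval on the "Failed to ensure lease exists, will retry" lines: 200ms, then 400ms, 800ms, 1.6s, 3.2s, and later 6.4s. The retry delay doubles after each failure, i.e. exponential backoff. A minimal sketch of that schedule as observed here:

    // Sketch of the doubling retry schedule visible in the lease errors.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        interval := 200 * time.Millisecond
        for i := 0; i < 6; i++ {
            fmt.Println("retry in", interval) // 200ms, 400ms, 800ms, 1.6s, 3.2s, 6.4s
            interval *= 2                     // doubles after every failed attempt
        }
    }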
Mar 25 01:17:55.199146 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 25 01:17:55.200667 kubelet[2951]: I0325 01:17:55.200634 2951 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 25 01:17:55.200882 kubelet[2951]: I0325 01:17:55.200855 2951 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 25 01:17:55.200926 kubelet[2951]: I0325 01:17:55.200876 2951 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 25 01:17:55.201519 kubelet[2951]: I0325 01:17:55.201223 2951 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 25 01:17:55.202328 kubelet[2951]: E0325 01:17:55.202281 2951 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 25 01:17:55.202407 kubelet[2951]: E0325 01:17:55.202339 2951 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" Mar 25 01:17:55.209349 kubelet[2951]: W0325 01:17:55.209271 2951 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Mar 25 01:17:55.209349 kubelet[2951]: E0325 01:17:55.209315 2951 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:17:55.303351 kubelet[2951]: I0325 01:17:55.303251 2951 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:17:55.303755 kubelet[2951]: E0325 01:17:55.303723 2951 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:17:55.355184 kubelet[2951]: W0325 01:17:55.355118 2951 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-a-f1ebfb6c0b&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Mar 25 01:17:55.355184 kubelet[2951]: E0325 01:17:55.355157 2951 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-a-f1ebfb6c0b&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:17:55.505885 kubelet[2951]: I0325 01:17:55.505762 2951 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:17:55.506582 kubelet[2951]: E0325 01:17:55.506172 2951 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:17:55.527214 kubelet[2951]: E0325 01:17:55.527101 
2951 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.40:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.40:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4284.0.0-a-f1ebfb6c0b.182fe6e9a91d5d49 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284.0.0-a-f1ebfb6c0b,UID:ci-4284.0.0-a-f1ebfb6c0b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284.0.0-a-f1ebfb6c0b,},FirstTimestamp:2025-03-25 01:17:52.054713673 +0000 UTC m=+0.609350127,LastTimestamp:2025-03-25 01:17:52.054713673 +0000 UTC m=+0.609350127,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284.0.0-a-f1ebfb6c0b,}" Mar 25 01:17:55.908726 kubelet[2951]: I0325 01:17:55.908646 2951 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:17:55.909115 kubelet[2951]: E0325 01:17:55.908989 2951 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:17:56.452500 kubelet[2951]: W0325 01:17:56.452467 2951 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Mar 25 01:17:56.452749 kubelet[2951]: E0325 01:17:56.452522 2951 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:17:56.711684 kubelet[2951]: I0325 01:17:56.711557 2951 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:17:56.712010 kubelet[2951]: E0325 01:17:56.711938 2951 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:17:58.202408 kubelet[2951]: E0325 01:17:58.202367 2951 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:17:58.273102 kubelet[2951]: E0325 01:17:58.273062 2951 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-a-f1ebfb6c0b?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="6.4s" Mar 25 01:17:58.314509 kubelet[2951]: I0325 01:17:58.314123 2951 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:17:58.314509 kubelet[2951]: E0325 01:17:58.314418 2951 kubelet_node_status.go:108] "Unable to register node with API server" err="Post 
\"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:17:58.393634 systemd[1]: Created slice kubepods-burstable-pod092b551cbbb0928fd47f7c693764ad9f.slice - libcontainer container kubepods-burstable-pod092b551cbbb0928fd47f7c693764ad9f.slice. Mar 25 01:17:58.407473 kubelet[2951]: E0325 01:17:58.407201 2951 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" node="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:17:58.411327 systemd[1]: Created slice kubepods-burstable-pod22b1c7504578d0b88b7a79abd4305a43.slice - libcontainer container kubepods-burstable-pod22b1c7504578d0b88b7a79abd4305a43.slice. Mar 25 01:17:58.419149 kubelet[2951]: E0325 01:17:58.419092 2951 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" node="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:17:58.422120 systemd[1]: Created slice kubepods-burstable-podffdcf284c1693bfb0b69b0490ef46c09.slice - libcontainer container kubepods-burstable-podffdcf284c1693bfb0b69b0490ef46c09.slice. Mar 25 01:17:58.423865 kubelet[2951]: E0325 01:17:58.423836 2951 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" node="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:17:58.502268 kubelet[2951]: I0325 01:17:58.502080 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/092b551cbbb0928fd47f7c693764ad9f-ca-certs\") pod \"kube-apiserver-ci-4284.0.0-a-f1ebfb6c0b\" (UID: \"092b551cbbb0928fd47f7c693764ad9f\") " pod="kube-system/kube-apiserver-ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:17:58.502268 kubelet[2951]: I0325 01:17:58.502116 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/092b551cbbb0928fd47f7c693764ad9f-k8s-certs\") pod \"kube-apiserver-ci-4284.0.0-a-f1ebfb6c0b\" (UID: \"092b551cbbb0928fd47f7c693764ad9f\") " pod="kube-system/kube-apiserver-ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:17:58.502268 kubelet[2951]: I0325 01:17:58.502138 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/22b1c7504578d0b88b7a79abd4305a43-k8s-certs\") pod \"kube-controller-manager-ci-4284.0.0-a-f1ebfb6c0b\" (UID: \"22b1c7504578d0b88b7a79abd4305a43\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:17:58.502268 kubelet[2951]: I0325 01:17:58.502153 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/22b1c7504578d0b88b7a79abd4305a43-kubeconfig\") pod \"kube-controller-manager-ci-4284.0.0-a-f1ebfb6c0b\" (UID: \"22b1c7504578d0b88b7a79abd4305a43\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:17:58.502268 kubelet[2951]: I0325 01:17:58.502176 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/22b1c7504578d0b88b7a79abd4305a43-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284.0.0-a-f1ebfb6c0b\" (UID: \"22b1c7504578d0b88b7a79abd4305a43\") " 
pod="kube-system/kube-controller-manager-ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:17:58.502488 kubelet[2951]: I0325 01:17:58.502195 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ffdcf284c1693bfb0b69b0490ef46c09-kubeconfig\") pod \"kube-scheduler-ci-4284.0.0-a-f1ebfb6c0b\" (UID: \"ffdcf284c1693bfb0b69b0490ef46c09\") " pod="kube-system/kube-scheduler-ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:17:58.502488 kubelet[2951]: I0325 01:17:58.502211 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/092b551cbbb0928fd47f7c693764ad9f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284.0.0-a-f1ebfb6c0b\" (UID: \"092b551cbbb0928fd47f7c693764ad9f\") " pod="kube-system/kube-apiserver-ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:17:58.502488 kubelet[2951]: I0325 01:17:58.502227 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/22b1c7504578d0b88b7a79abd4305a43-ca-certs\") pod \"kube-controller-manager-ci-4284.0.0-a-f1ebfb6c0b\" (UID: \"22b1c7504578d0b88b7a79abd4305a43\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:17:58.502488 kubelet[2951]: I0325 01:17:58.502242 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/22b1c7504578d0b88b7a79abd4305a43-flexvolume-dir\") pod \"kube-controller-manager-ci-4284.0.0-a-f1ebfb6c0b\" (UID: \"22b1c7504578d0b88b7a79abd4305a43\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:17:58.607623 kubelet[2951]: W0325 01:17:58.607511 2951 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Mar 25 01:17:58.607623 kubelet[2951]: E0325 01:17:58.607553 2951 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:17:58.714021 containerd[1757]: time="2025-03-25T01:17:58.713983072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284.0.0-a-f1ebfb6c0b,Uid:092b551cbbb0928fd47f7c693764ad9f,Namespace:kube-system,Attempt:0,}" Mar 25 01:17:58.720820 containerd[1757]: time="2025-03-25T01:17:58.720779750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284.0.0-a-f1ebfb6c0b,Uid:22b1c7504578d0b88b7a79abd4305a43,Namespace:kube-system,Attempt:0,}" Mar 25 01:17:58.725698 containerd[1757]: time="2025-03-25T01:17:58.725662308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284.0.0-a-f1ebfb6c0b,Uid:ffdcf284c1693bfb0b69b0490ef46c09,Namespace:kube-system,Attempt:0,}" Mar 25 01:17:58.830641 containerd[1757]: time="2025-03-25T01:17:58.830491240Z" level=info msg="connecting to shim 8fb3fc3394b7dcac73791463cd4c4087433a5160e721c7a278c064d5e151f4dc" 
address="unix:///run/containerd/s/3444b5b56be7ef95afb73d2f7bc428fa12fd547724a95a61c2a226d6f9160ece" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:17:58.841069 containerd[1757]: time="2025-03-25T01:17:58.840962197Z" level=info msg="connecting to shim 4b02454937af729e58f466ad5eca52a01ff48699978199fa7ef92442d187ad5e" address="unix:///run/containerd/s/0cd2f2f92e5b41a2d769f2c7274ca6e68cfb2d1f1e7d22e424dad0686a748bea" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:17:58.862467 containerd[1757]: time="2025-03-25T01:17:58.862000232Z" level=info msg="connecting to shim b80a22a422189996436d8ccc9afb9f355ec2141b7f144d01b8638ec8a797cd36" address="unix:///run/containerd/s/15af002f17d8e16d9addacbf7d4d311c248b9fdbe73c82f7806349f7b7d74e9c" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:17:58.872932 systemd[1]: Started cri-containerd-8fb3fc3394b7dcac73791463cd4c4087433a5160e721c7a278c064d5e151f4dc.scope - libcontainer container 8fb3fc3394b7dcac73791463cd4c4087433a5160e721c7a278c064d5e151f4dc. Mar 25 01:17:58.887763 systemd[1]: Started cri-containerd-4b02454937af729e58f466ad5eca52a01ff48699978199fa7ef92442d187ad5e.scope - libcontainer container 4b02454937af729e58f466ad5eca52a01ff48699978199fa7ef92442d187ad5e. Mar 25 01:17:58.893626 systemd[1]: Started cri-containerd-b80a22a422189996436d8ccc9afb9f355ec2141b7f144d01b8638ec8a797cd36.scope - libcontainer container b80a22a422189996436d8ccc9afb9f355ec2141b7f144d01b8638ec8a797cd36. Mar 25 01:17:58.942449 containerd[1757]: time="2025-03-25T01:17:58.942172690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284.0.0-a-f1ebfb6c0b,Uid:22b1c7504578d0b88b7a79abd4305a43,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fb3fc3394b7dcac73791463cd4c4087433a5160e721c7a278c064d5e151f4dc\"" Mar 25 01:17:58.948284 containerd[1757]: time="2025-03-25T01:17:58.948247128Z" level=info msg="CreateContainer within sandbox \"8fb3fc3394b7dcac73791463cd4c4087433a5160e721c7a278c064d5e151f4dc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 25 01:17:58.951007 containerd[1757]: time="2025-03-25T01:17:58.950869607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284.0.0-a-f1ebfb6c0b,Uid:092b551cbbb0928fd47f7c693764ad9f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b02454937af729e58f466ad5eca52a01ff48699978199fa7ef92442d187ad5e\"" Mar 25 01:17:58.954441 containerd[1757]: time="2025-03-25T01:17:58.953814647Z" level=info msg="CreateContainer within sandbox \"4b02454937af729e58f466ad5eca52a01ff48699978199fa7ef92442d187ad5e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 25 01:17:58.971476 containerd[1757]: time="2025-03-25T01:17:58.971443042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284.0.0-a-f1ebfb6c0b,Uid:ffdcf284c1693bfb0b69b0490ef46c09,Namespace:kube-system,Attempt:0,} returns sandbox id \"b80a22a422189996436d8ccc9afb9f355ec2141b7f144d01b8638ec8a797cd36\"" Mar 25 01:17:58.973532 containerd[1757]: time="2025-03-25T01:17:58.973486881Z" level=info msg="CreateContainer within sandbox \"b80a22a422189996436d8ccc9afb9f355ec2141b7f144d01b8638ec8a797cd36\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 25 01:17:58.985952 containerd[1757]: time="2025-03-25T01:17:58.985906478Z" level=info msg="Container f6e3b26603edac2cc4284b72ec6fc7b817405bbe27c003a50c23107e9f2c6971: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:17:59.007751 containerd[1757]: time="2025-03-25T01:17:59.007708072Z" 
level=info msg="Container 7ba14c2e59c7d820ec3aec927cd74df912fdd7ecb55b2d2ecf29ab470b3173d3: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:17:59.025706 containerd[1757]: time="2025-03-25T01:17:59.025659107Z" level=info msg="CreateContainer within sandbox \"8fb3fc3394b7dcac73791463cd4c4087433a5160e721c7a278c064d5e151f4dc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f6e3b26603edac2cc4284b72ec6fc7b817405bbe27c003a50c23107e9f2c6971\"" Mar 25 01:17:59.027499 containerd[1757]: time="2025-03-25T01:17:59.026274707Z" level=info msg="StartContainer for \"f6e3b26603edac2cc4284b72ec6fc7b817405bbe27c003a50c23107e9f2c6971\"" Mar 25 01:17:59.027499 containerd[1757]: time="2025-03-25T01:17:59.027447467Z" level=info msg="connecting to shim f6e3b26603edac2cc4284b72ec6fc7b817405bbe27c003a50c23107e9f2c6971" address="unix:///run/containerd/s/3444b5b56be7ef95afb73d2f7bc428fa12fd547724a95a61c2a226d6f9160ece" protocol=ttrpc version=3 Mar 25 01:17:59.044461 containerd[1757]: time="2025-03-25T01:17:59.044174622Z" level=info msg="CreateContainer within sandbox \"4b02454937af729e58f466ad5eca52a01ff48699978199fa7ef92442d187ad5e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7ba14c2e59c7d820ec3aec927cd74df912fdd7ecb55b2d2ecf29ab470b3173d3\"" Mar 25 01:17:59.045621 containerd[1757]: time="2025-03-25T01:17:59.044997742Z" level=info msg="StartContainer for \"7ba14c2e59c7d820ec3aec927cd74df912fdd7ecb55b2d2ecf29ab470b3173d3\"" Mar 25 01:17:59.046158 containerd[1757]: time="2025-03-25T01:17:59.046130302Z" level=info msg="connecting to shim 7ba14c2e59c7d820ec3aec927cd74df912fdd7ecb55b2d2ecf29ab470b3173d3" address="unix:///run/containerd/s/0cd2f2f92e5b41a2d769f2c7274ca6e68cfb2d1f1e7d22e424dad0686a748bea" protocol=ttrpc version=3 Mar 25 01:17:59.046718 containerd[1757]: time="2025-03-25T01:17:59.046165942Z" level=info msg="Container d45099e5942abab3700d29a66f35595a8f837946c3dbfa8aba0d6c5fc270df95: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:17:59.049060 systemd[1]: Started cri-containerd-f6e3b26603edac2cc4284b72ec6fc7b817405bbe27c003a50c23107e9f2c6971.scope - libcontainer container f6e3b26603edac2cc4284b72ec6fc7b817405bbe27c003a50c23107e9f2c6971. Mar 25 01:17:59.064769 systemd[1]: Started cri-containerd-7ba14c2e59c7d820ec3aec927cd74df912fdd7ecb55b2d2ecf29ab470b3173d3.scope - libcontainer container 7ba14c2e59c7d820ec3aec927cd74df912fdd7ecb55b2d2ecf29ab470b3173d3. Mar 25 01:17:59.068565 containerd[1757]: time="2025-03-25T01:17:59.068443096Z" level=info msg="CreateContainer within sandbox \"b80a22a422189996436d8ccc9afb9f355ec2141b7f144d01b8638ec8a797cd36\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d45099e5942abab3700d29a66f35595a8f837946c3dbfa8aba0d6c5fc270df95\"" Mar 25 01:17:59.069183 containerd[1757]: time="2025-03-25T01:17:59.069135335Z" level=info msg="StartContainer for \"d45099e5942abab3700d29a66f35595a8f837946c3dbfa8aba0d6c5fc270df95\"" Mar 25 01:17:59.072298 containerd[1757]: time="2025-03-25T01:17:59.071882535Z" level=info msg="connecting to shim d45099e5942abab3700d29a66f35595a8f837946c3dbfa8aba0d6c5fc270df95" address="unix:///run/containerd/s/15af002f17d8e16d9addacbf7d4d311c248b9fdbe73c82f7806349f7b7d74e9c" protocol=ttrpc version=3 Mar 25 01:17:59.093044 systemd[1]: Started cri-containerd-d45099e5942abab3700d29a66f35595a8f837946c3dbfa8aba0d6c5fc270df95.scope - libcontainer container d45099e5942abab3700d29a66f35595a8f837946c3dbfa8aba0d6c5fc270df95. 
Mar 25 01:17:59.144827 containerd[1757]: time="2025-03-25T01:17:59.144381235Z" level=info msg="StartContainer for \"7ba14c2e59c7d820ec3aec927cd74df912fdd7ecb55b2d2ecf29ab470b3173d3\" returns successfully" Mar 25 01:17:59.159396 containerd[1757]: time="2025-03-25T01:17:59.159106511Z" level=info msg="StartContainer for \"f6e3b26603edac2cc4284b72ec6fc7b817405bbe27c003a50c23107e9f2c6971\" returns successfully" Mar 25 01:17:59.171589 containerd[1757]: time="2025-03-25T01:17:59.171536628Z" level=info msg="StartContainer for \"d45099e5942abab3700d29a66f35595a8f837946c3dbfa8aba0d6c5fc270df95\" returns successfully" Mar 25 01:18:00.135288 kubelet[2951]: E0325 01:18:00.135251 2951 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" node="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:00.142419 kubelet[2951]: E0325 01:18:00.142393 2951 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" node="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:00.143723 kubelet[2951]: E0325 01:18:00.142989 2951 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" node="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:01.054221 kubelet[2951]: I0325 01:18:01.053982 2951 apiserver.go:52] "Watching apiserver" Mar 25 01:18:01.068973 kubelet[2951]: I0325 01:18:01.068925 2951 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 25 01:18:01.146679 kubelet[2951]: E0325 01:18:01.146158 2951 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" node="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:01.146679 kubelet[2951]: E0325 01:18:01.146491 2951 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" node="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:01.148044 kubelet[2951]: E0325 01:18:01.148028 2951 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" node="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:01.469755 kubelet[2951]: E0325 01:18:01.469722 2951 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4284.0.0-a-f1ebfb6c0b" not found Mar 25 01:18:01.518990 kubelet[2951]: I0325 01:18:01.518732 2951 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:01.525689 kubelet[2951]: I0325 01:18:01.525643 2951 kubelet_node_status.go:79] "Successfully registered node" node="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:01.567420 kubelet[2951]: I0325 01:18:01.567243 2951 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:01.575545 kubelet[2951]: E0325 01:18:01.575511 2951 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4284.0.0-a-f1ebfb6c0b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:01.575545 kubelet[2951]: I0325 01:18:01.575540 2951 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4284.0.0-a-f1ebfb6c0b" 
Mar 25 01:18:01.577460 kubelet[2951]: E0325 01:18:01.577287 2951 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4284.0.0-a-f1ebfb6c0b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:01.577460 kubelet[2951]: I0325 01:18:01.577309 2951 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:01.580081 kubelet[2951]: E0325 01:18:01.580059 2951 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4284.0.0-a-f1ebfb6c0b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:02.148384 kubelet[2951]: I0325 01:18:02.148258 2951 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:02.150228 kubelet[2951]: I0325 01:18:02.148648 2951 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:02.166775 kubelet[2951]: W0325 01:18:02.166741 2951 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 25 01:18:02.173849 kubelet[2951]: W0325 01:18:02.173831 2951 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 25 01:18:03.434401 systemd[1]: Reload requested from client PID 3222 ('systemctl') (unit session-9.scope)... Mar 25 01:18:03.435210 systemd[1]: Reloading... Mar 25 01:18:03.519635 zram_generator::config[3269]: No configuration found. Mar 25 01:18:03.667330 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 25 01:18:03.804902 systemd[1]: Reloading finished in 369 ms. Mar 25 01:18:03.828890 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:18:03.843170 systemd[1]: kubelet.service: Deactivated successfully. Mar 25 01:18:03.843412 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:18:03.843470 systemd[1]: kubelet.service: Consumed 957ms CPU time, 122.1M memory peak. Mar 25 01:18:03.846507 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:18:03.985050 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:18:03.988933 (kubelet)[3333]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 25 01:18:04.031688 kubelet[3333]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 25 01:18:04.031688 kubelet[3333]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 25 01:18:04.031688 kubelet[3333]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 25 01:18:04.032021 kubelet[3333]: I0325 01:18:04.031738 3333 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 25 01:18:04.042217 kubelet[3333]: I0325 01:18:04.042178 3333 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 25 01:18:04.044035 kubelet[3333]: I0325 01:18:04.043717 3333 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 25 01:18:04.044134 kubelet[3333]: I0325 01:18:04.044125 3333 server.go:954] "Client rotation is on, will bootstrap in background" Mar 25 01:18:04.046250 kubelet[3333]: I0325 01:18:04.046165 3333 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 25 01:18:04.050211 kubelet[3333]: I0325 01:18:04.049498 3333 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 25 01:18:04.053646 kubelet[3333]: I0325 01:18:04.053563 3333 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 25 01:18:04.059380 kubelet[3333]: I0325 01:18:04.058465 3333 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 25 01:18:04.059882 kubelet[3333]: I0325 01:18:04.059475 3333 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 25 01:18:04.059882 kubelet[3333]: I0325 01:18:04.059514 3333 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284.0.0-a-f1ebfb6c0b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 25 01:18:04.059882 kubelet[3333]: I0325 01:18:04.059748 3333 topology_manager.go:138] "Creating topology manager with none policy" Mar 25 01:18:04.059882 kubelet[3333]: I0325 01:18:04.059757 3333 container_manager_linux.go:304] "Creating device plugin manager" Mar 25 01:18:04.060298 kubelet[3333]: I0325 
01:18:04.059806 3333 state_mem.go:36] "Initialized new in-memory state store" Mar 25 01:18:04.060298 kubelet[3333]: I0325 01:18:04.059926 3333 kubelet.go:446] "Attempting to sync node with API server" Mar 25 01:18:04.060298 kubelet[3333]: I0325 01:18:04.059937 3333 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 25 01:18:04.060298 kubelet[3333]: I0325 01:18:04.059958 3333 kubelet.go:352] "Adding apiserver pod source" Mar 25 01:18:04.060298 kubelet[3333]: I0325 01:18:04.059971 3333 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 25 01:18:04.062865 kubelet[3333]: I0325 01:18:04.062664 3333 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 25 01:18:04.063414 kubelet[3333]: I0325 01:18:04.063398 3333 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 25 01:18:04.064208 kubelet[3333]: I0325 01:18:04.064123 3333 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 25 01:18:04.064208 kubelet[3333]: I0325 01:18:04.064157 3333 server.go:1287] "Started kubelet" Mar 25 01:18:04.066779 kubelet[3333]: I0325 01:18:04.066640 3333 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 25 01:18:04.068087 kubelet[3333]: I0325 01:18:04.068052 3333 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 25 01:18:04.072292 kubelet[3333]: I0325 01:18:04.072239 3333 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 25 01:18:04.072886 kubelet[3333]: I0325 01:18:04.072691 3333 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 25 01:18:04.074339 kubelet[3333]: I0325 01:18:04.074313 3333 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 25 01:18:04.077220 kubelet[3333]: I0325 01:18:04.077194 3333 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 25 01:18:04.077439 kubelet[3333]: E0325 01:18:04.077419 3333 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-f1ebfb6c0b\" not found" Mar 25 01:18:04.078370 kubelet[3333]: I0325 01:18:04.078352 3333 server.go:490] "Adding debug handlers to kubelet server" Mar 25 01:18:04.080523 kubelet[3333]: I0325 01:18:04.080021 3333 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 25 01:18:04.080523 kubelet[3333]: I0325 01:18:04.080144 3333 reconciler.go:26] "Reconciler: start to sync state" Mar 25 01:18:04.084119 kubelet[3333]: I0325 01:18:04.084098 3333 factory.go:221] Registration of the systemd container factory successfully Mar 25 01:18:04.085170 kubelet[3333]: I0325 01:18:04.085145 3333 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 25 01:18:04.089713 kubelet[3333]: I0325 01:18:04.089680 3333 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 25 01:18:04.091060 kubelet[3333]: I0325 01:18:04.091025 3333 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 25 01:18:04.091060 kubelet[3333]: I0325 01:18:04.091054 3333 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 25 01:18:04.091150 kubelet[3333]: I0325 01:18:04.091073 3333 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 25 01:18:04.091150 kubelet[3333]: I0325 01:18:04.091080 3333 kubelet.go:2388] "Starting kubelet main sync loop" Mar 25 01:18:04.091150 kubelet[3333]: E0325 01:18:04.091116 3333 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 25 01:18:04.102212 kubelet[3333]: I0325 01:18:04.102176 3333 factory.go:221] Registration of the containerd container factory successfully Mar 25 01:18:04.175252 kubelet[3333]: I0325 01:18:04.175227 3333 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 25 01:18:04.342561 kubelet[3333]: I0325 01:18:04.175330 3333 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 25 01:18:04.342561 kubelet[3333]: I0325 01:18:04.175352 3333 state_mem.go:36] "Initialized new in-memory state store" Mar 25 01:18:04.342561 kubelet[3333]: E0325 01:18:04.191804 3333 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 25 01:18:04.342561 kubelet[3333]: I0325 01:18:04.341339 3333 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 25 01:18:04.342561 kubelet[3333]: I0325 01:18:04.341362 3333 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 25 01:18:04.342561 kubelet[3333]: I0325 01:18:04.341381 3333 policy_none.go:49] "None policy: Start" Mar 25 01:18:04.342561 kubelet[3333]: I0325 01:18:04.341392 3333 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 25 01:18:04.342561 kubelet[3333]: I0325 01:18:04.341403 3333 state_mem.go:35] "Initializing new in-memory state store" Mar 25 01:18:04.342561 kubelet[3333]: I0325 01:18:04.341501 3333 state_mem.go:75] "Updated machine memory state" Mar 25 01:18:04.346667 kubelet[3333]: I0325 01:18:04.346280 3333 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 25 01:18:04.346667 kubelet[3333]: I0325 01:18:04.346449 3333 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 25 01:18:04.346667 kubelet[3333]: I0325 01:18:04.346460 3333 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 25 01:18:04.346799 kubelet[3333]: I0325 01:18:04.346755 3333 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 25 01:18:04.348623 kubelet[3333]: E0325 01:18:04.348396 3333 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 25 01:18:04.392853 kubelet[3333]: I0325 01:18:04.392650 3333 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:04.392853 kubelet[3333]: I0325 01:18:04.392719 3333 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:04.392853 kubelet[3333]: I0325 01:18:04.392664 3333 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:04.401809 kubelet[3333]: W0325 01:18:04.401749 3333 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 25 01:18:04.405789 kubelet[3333]: W0325 01:18:04.405758 3333 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 25 01:18:04.405950 kubelet[3333]: E0325 01:18:04.405827 3333 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4284.0.0-a-f1ebfb6c0b\" already exists" pod="kube-system/kube-scheduler-ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:04.406067 kubelet[3333]: W0325 01:18:04.406048 3333 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 25 01:18:04.406121 kubelet[3333]: E0325 01:18:04.406098 3333 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4284.0.0-a-f1ebfb6c0b\" already exists" pod="kube-system/kube-apiserver-ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:04.456809 kubelet[3333]: I0325 01:18:04.456779 3333 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:04.468491 kubelet[3333]: I0325 01:18:04.468179 3333 kubelet_node_status.go:125] "Node was previously registered" node="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:04.468491 kubelet[3333]: I0325 01:18:04.468249 3333 kubelet_node_status.go:79] "Successfully registered node" node="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:04.492955 kubelet[3333]: I0325 01:18:04.492920 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/22b1c7504578d0b88b7a79abd4305a43-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284.0.0-a-f1ebfb6c0b\" (UID: \"22b1c7504578d0b88b7a79abd4305a43\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:04.492955 kubelet[3333]: I0325 01:18:04.492955 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/092b551cbbb0928fd47f7c693764ad9f-k8s-certs\") pod \"kube-apiserver-ci-4284.0.0-a-f1ebfb6c0b\" (UID: \"092b551cbbb0928fd47f7c693764ad9f\") " pod="kube-system/kube-apiserver-ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:04.493098 kubelet[3333]: I0325 01:18:04.492971 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/22b1c7504578d0b88b7a79abd4305a43-ca-certs\") pod \"kube-controller-manager-ci-4284.0.0-a-f1ebfb6c0b\" (UID: \"22b1c7504578d0b88b7a79abd4305a43\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:04.493098 kubelet[3333]: 
I0325 01:18:04.492989 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/22b1c7504578d0b88b7a79abd4305a43-flexvolume-dir\") pod \"kube-controller-manager-ci-4284.0.0-a-f1ebfb6c0b\" (UID: \"22b1c7504578d0b88b7a79abd4305a43\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:04.493098 kubelet[3333]: I0325 01:18:04.493006 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/22b1c7504578d0b88b7a79abd4305a43-k8s-certs\") pod \"kube-controller-manager-ci-4284.0.0-a-f1ebfb6c0b\" (UID: \"22b1c7504578d0b88b7a79abd4305a43\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:04.493098 kubelet[3333]: I0325 01:18:04.493021 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/092b551cbbb0928fd47f7c693764ad9f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284.0.0-a-f1ebfb6c0b\" (UID: \"092b551cbbb0928fd47f7c693764ad9f\") " pod="kube-system/kube-apiserver-ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:04.493098 kubelet[3333]: I0325 01:18:04.493035 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/22b1c7504578d0b88b7a79abd4305a43-kubeconfig\") pod \"kube-controller-manager-ci-4284.0.0-a-f1ebfb6c0b\" (UID: \"22b1c7504578d0b88b7a79abd4305a43\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:04.493210 kubelet[3333]: I0325 01:18:04.493050 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ffdcf284c1693bfb0b69b0490ef46c09-kubeconfig\") pod \"kube-scheduler-ci-4284.0.0-a-f1ebfb6c0b\" (UID: \"ffdcf284c1693bfb0b69b0490ef46c09\") " pod="kube-system/kube-scheduler-ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:04.493210 kubelet[3333]: I0325 01:18:04.493064 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/092b551cbbb0928fd47f7c693764ad9f-ca-certs\") pod \"kube-apiserver-ci-4284.0.0-a-f1ebfb6c0b\" (UID: \"092b551cbbb0928fd47f7c693764ad9f\") " pod="kube-system/kube-apiserver-ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:05.072130 kubelet[3333]: I0325 01:18:05.071857 3333 apiserver.go:52] "Watching apiserver" Mar 25 01:18:05.156363 kubelet[3333]: I0325 01:18:05.156329 3333 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:05.156675 kubelet[3333]: I0325 01:18:05.156650 3333 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:05.156964 kubelet[3333]: I0325 01:18:05.156940 3333 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:05.181205 kubelet[3333]: W0325 01:18:05.181117 3333 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 25 01:18:05.181205 kubelet[3333]: W0325 01:18:05.181162 3333 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in 
surprising behavior; a DNS label is recommended: [must not contain dots] Mar 25 01:18:05.181578 kubelet[3333]: E0325 01:18:05.181308 3333 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4284.0.0-a-f1ebfb6c0b\" already exists" pod="kube-system/kube-controller-manager-ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:05.181852 kubelet[3333]: W0325 01:18:05.181839 3333 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 25 01:18:05.182111 kubelet[3333]: E0325 01:18:05.181996 3333 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4284.0.0-a-f1ebfb6c0b\" already exists" pod="kube-system/kube-apiserver-ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:05.182347 kubelet[3333]: E0325 01:18:05.182289 3333 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4284.0.0-a-f1ebfb6c0b\" already exists" pod="kube-system/kube-scheduler-ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:05.182729 kubelet[3333]: I0325 01:18:05.182679 3333 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 25 01:18:05.231401 kubelet[3333]: I0325 01:18:05.230996 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4284.0.0-a-f1ebfb6c0b" podStartSLOduration=1.230975799 podStartE2EDuration="1.230975799s" podCreationTimestamp="2025-03-25 01:18:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:18:05.205280209 +0000 UTC m=+1.213610523" watchObservedRunningTime="2025-03-25 01:18:05.230975799 +0000 UTC m=+1.239306153" Mar 25 01:18:05.252660 kubelet[3333]: I0325 01:18:05.252471 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4284.0.0-a-f1ebfb6c0b" podStartSLOduration=3.25245071 podStartE2EDuration="3.25245071s" podCreationTimestamp="2025-03-25 01:18:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:18:05.231286079 +0000 UTC m=+1.239616393" watchObservedRunningTime="2025-03-25 01:18:05.25245071 +0000 UTC m=+1.260781064" Mar 25 01:18:08.314644 kubelet[3333]: I0325 01:18:08.314459 3333 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 25 01:18:08.316782 containerd[1757]: time="2025-03-25T01:18:08.316110646Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 25 01:18:08.317070 kubelet[3333]: I0325 01:18:08.316546 3333 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 25 01:18:08.595063 kubelet[3333]: I0325 01:18:08.595001 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4284.0.0-a-f1ebfb6c0b" podStartSLOduration=6.5949842400000005 podStartE2EDuration="6.59498424s" podCreationTimestamp="2025-03-25 01:18:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:18:05.25341415 +0000 UTC m=+1.261744504" watchObservedRunningTime="2025-03-25 01:18:08.59498424 +0000 UTC m=+4.603314594" Mar 25 01:18:08.601641 systemd[1]: Created slice kubepods-besteffort-pod167492e6_fd4f_4d89_a1fe_ac0f5e119c67.slice - libcontainer container kubepods-besteffort-pod167492e6_fd4f_4d89_a1fe_ac0f5e119c67.slice. Mar 25 01:18:08.606276 kubelet[3333]: I0325 01:18:08.605441 3333 status_manager.go:890] "Failed to get status for pod" podUID="167492e6-fd4f-4d89-a1fe-ac0f5e119c67" pod="kube-system/kube-proxy-k9dn4" err="pods \"kube-proxy-k9dn4\" is forbidden: User \"system:node:ci-4284.0.0-a-f1ebfb6c0b\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4284.0.0-a-f1ebfb6c0b' and this object" Mar 25 01:18:08.606276 kubelet[3333]: W0325 01:18:08.605522 3333 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4284.0.0-a-f1ebfb6c0b" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4284.0.0-a-f1ebfb6c0b' and this object Mar 25 01:18:08.606276 kubelet[3333]: E0325 01:18:08.605549 3333 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4284.0.0-a-f1ebfb6c0b\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4284.0.0-a-f1ebfb6c0b' and this object" logger="UnhandledError" Mar 25 01:18:08.622374 kubelet[3333]: I0325 01:18:08.622312 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/167492e6-fd4f-4d89-a1fe-ac0f5e119c67-kube-proxy\") pod \"kube-proxy-k9dn4\" (UID: \"167492e6-fd4f-4d89-a1fe-ac0f5e119c67\") " pod="kube-system/kube-proxy-k9dn4" Mar 25 01:18:08.622374 kubelet[3333]: I0325 01:18:08.622358 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/167492e6-fd4f-4d89-a1fe-ac0f5e119c67-xtables-lock\") pod \"kube-proxy-k9dn4\" (UID: \"167492e6-fd4f-4d89-a1fe-ac0f5e119c67\") " pod="kube-system/kube-proxy-k9dn4" Mar 25 01:18:08.622374 kubelet[3333]: I0325 01:18:08.622384 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/167492e6-fd4f-4d89-a1fe-ac0f5e119c67-lib-modules\") pod \"kube-proxy-k9dn4\" (UID: \"167492e6-fd4f-4d89-a1fe-ac0f5e119c67\") " pod="kube-system/kube-proxy-k9dn4" Mar 25 01:18:08.622374 kubelet[3333]: I0325 01:18:08.622405 3333 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8wcl\" (UniqueName: \"kubernetes.io/projected/167492e6-fd4f-4d89-a1fe-ac0f5e119c67-kube-api-access-b8wcl\") pod \"kube-proxy-k9dn4\" (UID: \"167492e6-fd4f-4d89-a1fe-ac0f5e119c67\") " pod="kube-system/kube-proxy-k9dn4" Mar 25 01:18:09.179529 sudo[2252]: pam_unix(sudo:session): session closed for user root Mar 25 01:18:09.268100 sshd[2251]: Connection closed by 10.200.16.10 port 50578 Mar 25 01:18:09.269294 sshd-session[2249]: pam_unix(sshd:session): session closed for user core Mar 25 01:18:09.272554 systemd[1]: sshd@6-10.200.20.40:22-10.200.16.10:50578.service: Deactivated successfully. Mar 25 01:18:09.272914 systemd-logind[1738]: Session 9 logged out. Waiting for processes to exit. Mar 25 01:18:09.276540 systemd[1]: session-9.scope: Deactivated successfully. Mar 25 01:18:09.276902 systemd[1]: session-9.scope: Consumed 5.663s CPU time, 228.4M memory peak. Mar 25 01:18:09.279133 systemd-logind[1738]: Removed session 9. Mar 25 01:18:09.333041 systemd[1]: Created slice kubepods-besteffort-pod664ae582_9c5b_4a3f_8977_c709b43f31e1.slice - libcontainer container kubepods-besteffort-pod664ae582_9c5b_4a3f_8977_c709b43f31e1.slice. Mar 25 01:18:09.427566 kubelet[3333]: I0325 01:18:09.427526 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/664ae582-9c5b-4a3f-8977-c709b43f31e1-var-lib-calico\") pod \"tigera-operator-ccfc44587-j74rz\" (UID: \"664ae582-9c5b-4a3f-8977-c709b43f31e1\") " pod="tigera-operator/tigera-operator-ccfc44587-j74rz" Mar 25 01:18:09.427566 kubelet[3333]: I0325 01:18:09.427568 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwjd9\" (UniqueName: \"kubernetes.io/projected/664ae582-9c5b-4a3f-8977-c709b43f31e1-kube-api-access-bwjd9\") pod \"tigera-operator-ccfc44587-j74rz\" (UID: \"664ae582-9c5b-4a3f-8977-c709b43f31e1\") " pod="tigera-operator/tigera-operator-ccfc44587-j74rz" Mar 25 01:18:09.636855 containerd[1757]: time="2025-03-25T01:18:09.636794779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-ccfc44587-j74rz,Uid:664ae582-9c5b-4a3f-8977-c709b43f31e1,Namespace:tigera-operator,Attempt:0,}" Mar 25 01:18:09.691176 containerd[1757]: time="2025-03-25T01:18:09.690753337Z" level=info msg="connecting to shim be4c4c10a976d27119f54e68f7bf1c4ccd540254bdf6dec72f8b9c23ce85d403" address="unix:///run/containerd/s/9bf22f635818660c97428dd4842964fc3206253c173e6c524d4a8ab832628524" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:18:09.711846 systemd[1]: Started cri-containerd-be4c4c10a976d27119f54e68f7bf1c4ccd540254bdf6dec72f8b9c23ce85d403.scope - libcontainer container be4c4c10a976d27119f54e68f7bf1c4ccd540254bdf6dec72f8b9c23ce85d403. 
Mar 25 01:18:09.752365 containerd[1757]: time="2025-03-25T01:18:09.752309216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-ccfc44587-j74rz,Uid:664ae582-9c5b-4a3f-8977-c709b43f31e1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"be4c4c10a976d27119f54e68f7bf1c4ccd540254bdf6dec72f8b9c23ce85d403\"" Mar 25 01:18:09.754983 containerd[1757]: time="2025-03-25T01:18:09.754946776Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\"" Mar 25 01:18:09.807447 containerd[1757]: time="2025-03-25T01:18:09.807409215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k9dn4,Uid:167492e6-fd4f-4d89-a1fe-ac0f5e119c67,Namespace:kube-system,Attempt:0,}" Mar 25 01:18:09.870047 containerd[1757]: time="2025-03-25T01:18:09.869810494Z" level=info msg="connecting to shim 9d42a17fc44752c48ed1379ad4c644d6568f0d3e4d72f8cfe3ab06c951d27f0d" address="unix:///run/containerd/s/649c72f6f387f7153ce02fe330659dd577e3ff2cdb77015b7c5ccf39c71dfa3c" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:18:09.890765 systemd[1]: Started cri-containerd-9d42a17fc44752c48ed1379ad4c644d6568f0d3e4d72f8cfe3ab06c951d27f0d.scope - libcontainer container 9d42a17fc44752c48ed1379ad4c644d6568f0d3e4d72f8cfe3ab06c951d27f0d. Mar 25 01:18:09.919346 containerd[1757]: time="2025-03-25T01:18:09.919300653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k9dn4,Uid:167492e6-fd4f-4d89-a1fe-ac0f5e119c67,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d42a17fc44752c48ed1379ad4c644d6568f0d3e4d72f8cfe3ab06c951d27f0d\"" Mar 25 01:18:09.924634 containerd[1757]: time="2025-03-25T01:18:09.923066413Z" level=info msg="CreateContainer within sandbox \"9d42a17fc44752c48ed1379ad4c644d6568f0d3e4d72f8cfe3ab06c951d27f0d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 25 01:18:09.957146 containerd[1757]: time="2025-03-25T01:18:09.957106452Z" level=info msg="Container dce3909b7a9e5a462a237553d5abbde54e18e2ac309d4bc82bb7d703a41ab440: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:18:09.979000 containerd[1757]: time="2025-03-25T01:18:09.978955691Z" level=info msg="CreateContainer within sandbox \"9d42a17fc44752c48ed1379ad4c644d6568f0d3e4d72f8cfe3ab06c951d27f0d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dce3909b7a9e5a462a237553d5abbde54e18e2ac309d4bc82bb7d703a41ab440\"" Mar 25 01:18:09.979665 containerd[1757]: time="2025-03-25T01:18:09.979622171Z" level=info msg="StartContainer for \"dce3909b7a9e5a462a237553d5abbde54e18e2ac309d4bc82bb7d703a41ab440\"" Mar 25 01:18:09.981959 containerd[1757]: time="2025-03-25T01:18:09.981925211Z" level=info msg="connecting to shim dce3909b7a9e5a462a237553d5abbde54e18e2ac309d4bc82bb7d703a41ab440" address="unix:///run/containerd/s/649c72f6f387f7153ce02fe330659dd577e3ff2cdb77015b7c5ccf39c71dfa3c" protocol=ttrpc version=3 Mar 25 01:18:10.001805 systemd[1]: Started cri-containerd-dce3909b7a9e5a462a237553d5abbde54e18e2ac309d4bc82bb7d703a41ab440.scope - libcontainer container dce3909b7a9e5a462a237553d5abbde54e18e2ac309d4bc82bb7d703a41ab440. 
Mar 25 01:18:10.041977 containerd[1757]: time="2025-03-25T01:18:10.041845370Z" level=info msg="StartContainer for \"dce3909b7a9e5a462a237553d5abbde54e18e2ac309d4bc82bb7d703a41ab440\" returns successfully" Mar 25 01:18:10.180708 kubelet[3333]: I0325 01:18:10.179947 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k9dn4" podStartSLOduration=2.179929727 podStartE2EDuration="2.179929727s" podCreationTimestamp="2025-03-25 01:18:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:18:10.177586767 +0000 UTC m=+6.185917161" watchObservedRunningTime="2025-03-25 01:18:10.179929727 +0000 UTC m=+6.188260081" Mar 25 01:18:12.551309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount844313534.mount: Deactivated successfully. Mar 25 01:18:12.938658 containerd[1757]: time="2025-03-25T01:18:12.938186430Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:18:12.940857 containerd[1757]: time="2025-03-25T01:18:12.940811950Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.5: active requests=0, bytes read=19271115" Mar 25 01:18:12.946101 containerd[1757]: time="2025-03-25T01:18:12.946046590Z" level=info msg="ImageCreate event name:\"sha256:a709184cc04589116e7266cb3575491ae8f2ac1c959975fea966447025f66eaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:18:12.951068 containerd[1757]: time="2025-03-25T01:18:12.950460070Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:18:12.951068 containerd[1757]: time="2025-03-25T01:18:12.950948470Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.5\" with image id \"sha256:a709184cc04589116e7266cb3575491ae8f2ac1c959975fea966447025f66eaa\", repo tag \"quay.io/tigera/operator:v1.36.5\", repo digest \"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\", size \"19267110\" in 3.195960694s" Mar 25 01:18:12.951068 containerd[1757]: time="2025-03-25T01:18:12.950977910Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\" returns image reference \"sha256:a709184cc04589116e7266cb3575491ae8f2ac1c959975fea966447025f66eaa\"" Mar 25 01:18:12.954307 containerd[1757]: time="2025-03-25T01:18:12.954274870Z" level=info msg="CreateContainer within sandbox \"be4c4c10a976d27119f54e68f7bf1c4ccd540254bdf6dec72f8b9c23ce85d403\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 25 01:18:12.974390 containerd[1757]: time="2025-03-25T01:18:12.973537709Z" level=info msg="Container 93c07ff1b175025dadd6de0b645d1f5976e91b75b93d5518aca59f8765aef89c: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:18:12.990365 containerd[1757]: time="2025-03-25T01:18:12.990250949Z" level=info msg="CreateContainer within sandbox \"be4c4c10a976d27119f54e68f7bf1c4ccd540254bdf6dec72f8b9c23ce85d403\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"93c07ff1b175025dadd6de0b645d1f5976e91b75b93d5518aca59f8765aef89c\"" Mar 25 01:18:12.990835 containerd[1757]: time="2025-03-25T01:18:12.990704349Z" level=info msg="StartContainer for \"93c07ff1b175025dadd6de0b645d1f5976e91b75b93d5518aca59f8765aef89c\"" Mar 25 01:18:12.991905 containerd[1757]: 
time="2025-03-25T01:18:12.991840669Z" level=info msg="connecting to shim 93c07ff1b175025dadd6de0b645d1f5976e91b75b93d5518aca59f8765aef89c" address="unix:///run/containerd/s/9bf22f635818660c97428dd4842964fc3206253c173e6c524d4a8ab832628524" protocol=ttrpc version=3 Mar 25 01:18:13.014813 systemd[1]: Started cri-containerd-93c07ff1b175025dadd6de0b645d1f5976e91b75b93d5518aca59f8765aef89c.scope - libcontainer container 93c07ff1b175025dadd6de0b645d1f5976e91b75b93d5518aca59f8765aef89c. Mar 25 01:18:13.043426 containerd[1757]: time="2025-03-25T01:18:13.043390068Z" level=info msg="StartContainer for \"93c07ff1b175025dadd6de0b645d1f5976e91b75b93d5518aca59f8765aef89c\" returns successfully" Mar 25 01:18:14.742521 kubelet[3333]: I0325 01:18:14.742124 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-ccfc44587-j74rz" podStartSLOduration=2.543320477 podStartE2EDuration="5.742111091s" podCreationTimestamp="2025-03-25 01:18:09 +0000 UTC" firstStartedPulling="2025-03-25 01:18:09.753533616 +0000 UTC m=+5.761863970" lastFinishedPulling="2025-03-25 01:18:12.95232427 +0000 UTC m=+8.960654584" observedRunningTime="2025-03-25 01:18:13.187898345 +0000 UTC m=+9.196228699" watchObservedRunningTime="2025-03-25 01:18:14.742111091 +0000 UTC m=+10.750441405" Mar 25 01:18:17.065246 systemd[1]: Created slice kubepods-besteffort-podb295f55c_ee34_4b73_b398_00c3b94a537c.slice - libcontainer container kubepods-besteffort-podb295f55c_ee34_4b73_b398_00c3b94a537c.slice. Mar 25 01:18:17.078712 kubelet[3333]: I0325 01:18:17.078615 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b295f55c-ee34-4b73-b398-00c3b94a537c-typha-certs\") pod \"calico-typha-7c9d4d85c6-ghjbp\" (UID: \"b295f55c-ee34-4b73-b398-00c3b94a537c\") " pod="calico-system/calico-typha-7c9d4d85c6-ghjbp" Mar 25 01:18:17.078712 kubelet[3333]: I0325 01:18:17.078663 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b295f55c-ee34-4b73-b398-00c3b94a537c-tigera-ca-bundle\") pod \"calico-typha-7c9d4d85c6-ghjbp\" (UID: \"b295f55c-ee34-4b73-b398-00c3b94a537c\") " pod="calico-system/calico-typha-7c9d4d85c6-ghjbp" Mar 25 01:18:17.079850 kubelet[3333]: I0325 01:18:17.079550 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lml4x\" (UniqueName: \"kubernetes.io/projected/b295f55c-ee34-4b73-b398-00c3b94a537c-kube-api-access-lml4x\") pod \"calico-typha-7c9d4d85c6-ghjbp\" (UID: \"b295f55c-ee34-4b73-b398-00c3b94a537c\") " pod="calico-system/calico-typha-7c9d4d85c6-ghjbp" Mar 25 01:18:17.273575 systemd[1]: Created slice kubepods-besteffort-podc290e755_1595_4bfc_bdcc_5fa900680b95.slice - libcontainer container kubepods-besteffort-podc290e755_1595_4bfc_bdcc_5fa900680b95.slice. 
Mar 25 01:18:17.371387 containerd[1757]: time="2025-03-25T01:18:17.370879318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c9d4d85c6-ghjbp,Uid:b295f55c-ee34-4b73-b398-00c3b94a537c,Namespace:calico-system,Attempt:0,}" Mar 25 01:18:17.377168 kubelet[3333]: E0325 01:18:17.376499 3333 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d6vfh" podUID="dda6a7d3-a9dd-4cc3-a459-07b4c5111bbc" Mar 25 01:18:17.382381 kubelet[3333]: I0325 01:18:17.381849 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c290e755-1595-4bfc-bdcc-5fa900680b95-policysync\") pod \"calico-node-7znqv\" (UID: \"c290e755-1595-4bfc-bdcc-5fa900680b95\") " pod="calico-system/calico-node-7znqv" Mar 25 01:18:17.382381 kubelet[3333]: I0325 01:18:17.381886 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c290e755-1595-4bfc-bdcc-5fa900680b95-tigera-ca-bundle\") pod \"calico-node-7znqv\" (UID: \"c290e755-1595-4bfc-bdcc-5fa900680b95\") " pod="calico-system/calico-node-7znqv" Mar 25 01:18:17.382381 kubelet[3333]: I0325 01:18:17.381902 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c290e755-1595-4bfc-bdcc-5fa900680b95-var-run-calico\") pod \"calico-node-7znqv\" (UID: \"c290e755-1595-4bfc-bdcc-5fa900680b95\") " pod="calico-system/calico-node-7znqv" Mar 25 01:18:17.382381 kubelet[3333]: I0325 01:18:17.381917 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c290e755-1595-4bfc-bdcc-5fa900680b95-var-lib-calico\") pod \"calico-node-7znqv\" (UID: \"c290e755-1595-4bfc-bdcc-5fa900680b95\") " pod="calico-system/calico-node-7znqv" Mar 25 01:18:17.382381 kubelet[3333]: I0325 01:18:17.381936 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c290e755-1595-4bfc-bdcc-5fa900680b95-node-certs\") pod \"calico-node-7znqv\" (UID: \"c290e755-1595-4bfc-bdcc-5fa900680b95\") " pod="calico-system/calico-node-7znqv" Mar 25 01:18:17.382996 kubelet[3333]: I0325 01:18:17.381949 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c290e755-1595-4bfc-bdcc-5fa900680b95-cni-bin-dir\") pod \"calico-node-7znqv\" (UID: \"c290e755-1595-4bfc-bdcc-5fa900680b95\") " pod="calico-system/calico-node-7znqv" Mar 25 01:18:17.382996 kubelet[3333]: I0325 01:18:17.381965 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbplc\" (UniqueName: \"kubernetes.io/projected/c290e755-1595-4bfc-bdcc-5fa900680b95-kube-api-access-gbplc\") pod \"calico-node-7znqv\" (UID: \"c290e755-1595-4bfc-bdcc-5fa900680b95\") " pod="calico-system/calico-node-7znqv" Mar 25 01:18:17.382996 kubelet[3333]: I0325 01:18:17.381981 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: 
\"kubernetes.io/host-path/c290e755-1595-4bfc-bdcc-5fa900680b95-cni-log-dir\") pod \"calico-node-7znqv\" (UID: \"c290e755-1595-4bfc-bdcc-5fa900680b95\") " pod="calico-system/calico-node-7znqv" Mar 25 01:18:17.382996 kubelet[3333]: I0325 01:18:17.381999 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c290e755-1595-4bfc-bdcc-5fa900680b95-lib-modules\") pod \"calico-node-7znqv\" (UID: \"c290e755-1595-4bfc-bdcc-5fa900680b95\") " pod="calico-system/calico-node-7znqv" Mar 25 01:18:17.382996 kubelet[3333]: I0325 01:18:17.382016 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c290e755-1595-4bfc-bdcc-5fa900680b95-cni-net-dir\") pod \"calico-node-7znqv\" (UID: \"c290e755-1595-4bfc-bdcc-5fa900680b95\") " pod="calico-system/calico-node-7znqv" Mar 25 01:18:17.383158 kubelet[3333]: I0325 01:18:17.382036 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c290e755-1595-4bfc-bdcc-5fa900680b95-flexvol-driver-host\") pod \"calico-node-7znqv\" (UID: \"c290e755-1595-4bfc-bdcc-5fa900680b95\") " pod="calico-system/calico-node-7znqv" Mar 25 01:18:17.383158 kubelet[3333]: I0325 01:18:17.382052 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c290e755-1595-4bfc-bdcc-5fa900680b95-xtables-lock\") pod \"calico-node-7znqv\" (UID: \"c290e755-1595-4bfc-bdcc-5fa900680b95\") " pod="calico-system/calico-node-7znqv" Mar 25 01:18:17.437154 containerd[1757]: time="2025-03-25T01:18:17.435516374Z" level=info msg="connecting to shim e17e6d437681e299ce52aa70256d0c0993f42e8ab1220bb29b840e1fc7da10fc" address="unix:///run/containerd/s/2b1efd9c5ed8c1c8d958b5763a4e226dd92fc9c52f38057e9bd42c2d5619c014" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:18:17.471800 systemd[1]: Started cri-containerd-e17e6d437681e299ce52aa70256d0c0993f42e8ab1220bb29b840e1fc7da10fc.scope - libcontainer container e17e6d437681e299ce52aa70256d0c0993f42e8ab1220bb29b840e1fc7da10fc. 
Mar 25 01:18:17.482709 kubelet[3333]: I0325 01:18:17.482666 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dda6a7d3-a9dd-4cc3-a459-07b4c5111bbc-kubelet-dir\") pod \"csi-node-driver-d6vfh\" (UID: \"dda6a7d3-a9dd-4cc3-a459-07b4c5111bbc\") " pod="calico-system/csi-node-driver-d6vfh" Mar 25 01:18:17.482855 kubelet[3333]: I0325 01:18:17.482720 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/dda6a7d3-a9dd-4cc3-a459-07b4c5111bbc-varrun\") pod \"csi-node-driver-d6vfh\" (UID: \"dda6a7d3-a9dd-4cc3-a459-07b4c5111bbc\") " pod="calico-system/csi-node-driver-d6vfh" Mar 25 01:18:17.482855 kubelet[3333]: I0325 01:18:17.482803 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/dda6a7d3-a9dd-4cc3-a459-07b4c5111bbc-socket-dir\") pod \"csi-node-driver-d6vfh\" (UID: \"dda6a7d3-a9dd-4cc3-a459-07b4c5111bbc\") " pod="calico-system/csi-node-driver-d6vfh" Mar 25 01:18:17.482908 kubelet[3333]: I0325 01:18:17.482853 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dda6a7d3-a9dd-4cc3-a459-07b4c5111bbc-registration-dir\") pod \"csi-node-driver-d6vfh\" (UID: \"dda6a7d3-a9dd-4cc3-a459-07b4c5111bbc\") " pod="calico-system/csi-node-driver-d6vfh" Mar 25 01:18:17.482908 kubelet[3333]: I0325 01:18:17.482872 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xth8\" (UniqueName: \"kubernetes.io/projected/dda6a7d3-a9dd-4cc3-a459-07b4c5111bbc-kube-api-access-2xth8\") pod \"csi-node-driver-d6vfh\" (UID: \"dda6a7d3-a9dd-4cc3-a459-07b4c5111bbc\") " pod="calico-system/csi-node-driver-d6vfh" Mar 25 01:18:17.484578 kubelet[3333]: E0325 01:18:17.484544 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.484837 kubelet[3333]: W0325 01:18:17.484641 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.484837 kubelet[3333]: E0325 01:18:17.484669 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.486022 kubelet[3333]: E0325 01:18:17.485999 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.486022 kubelet[3333]: W0325 01:18:17.486017 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.486716 kubelet[3333]: E0325 01:18:17.486102 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 01:18:17.487150 kubelet[3333]: E0325 01:18:17.486985 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.487150 kubelet[3333]: W0325 01:18:17.487000 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.487150 kubelet[3333]: E0325 01:18:17.487030 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.487437 kubelet[3333]: E0325 01:18:17.487331 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.487437 kubelet[3333]: W0325 01:18:17.487344 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.487995 kubelet[3333]: E0325 01:18:17.487957 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.487995 kubelet[3333]: W0325 01:18:17.487972 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.488577 kubelet[3333]: E0325 01:18:17.488307 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.488577 kubelet[3333]: W0325 01:18:17.488320 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.488577 kubelet[3333]: E0325 01:18:17.488434 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.488577 kubelet[3333]: E0325 01:18:17.488457 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.488577 kubelet[3333]: E0325 01:18:17.488469 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.489912 kubelet[3333]: E0325 01:18:17.489749 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.489912 kubelet[3333]: W0325 01:18:17.489780 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.489912 kubelet[3333]: E0325 01:18:17.489856 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 01:18:17.490223 kubelet[3333]: E0325 01:18:17.490155 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.490223 kubelet[3333]: W0325 01:18:17.490168 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.490223 kubelet[3333]: E0325 01:18:17.490201 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.490611 kubelet[3333]: E0325 01:18:17.490527 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.490611 kubelet[3333]: W0325 01:18:17.490551 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.490611 kubelet[3333]: E0325 01:18:17.490609 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.491068 kubelet[3333]: E0325 01:18:17.490953 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.491068 kubelet[3333]: W0325 01:18:17.490968 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.491068 kubelet[3333]: E0325 01:18:17.490999 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.491947 kubelet[3333]: E0325 01:18:17.491779 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.491947 kubelet[3333]: W0325 01:18:17.491796 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.491947 kubelet[3333]: E0325 01:18:17.491839 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.492118 kubelet[3333]: E0325 01:18:17.492105 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.492209 kubelet[3333]: W0325 01:18:17.492197 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.492305 kubelet[3333]: E0325 01:18:17.492281 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 01:18:17.492752 kubelet[3333]: E0325 01:18:17.492571 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.492752 kubelet[3333]: W0325 01:18:17.492587 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.493194 kubelet[3333]: E0325 01:18:17.493090 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.493194 kubelet[3333]: W0325 01:18:17.493105 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.493707 kubelet[3333]: E0325 01:18:17.493629 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.493707 kubelet[3333]: E0325 01:18:17.493654 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.493707 kubelet[3333]: E0325 01:18:17.493682 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.493707 kubelet[3333]: W0325 01:18:17.493691 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.494018 kubelet[3333]: E0325 01:18:17.493938 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.494423 kubelet[3333]: E0325 01:18:17.494355 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.494423 kubelet[3333]: W0325 01:18:17.494373 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.494957 kubelet[3333]: E0325 01:18:17.494686 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.495400 kubelet[3333]: E0325 01:18:17.495364 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.495587 kubelet[3333]: W0325 01:18:17.495571 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.495816 kubelet[3333]: E0325 01:18:17.495686 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 01:18:17.496303 kubelet[3333]: E0325 01:18:17.496173 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.496303 kubelet[3333]: W0325 01:18:17.496193 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.496303 kubelet[3333]: E0325 01:18:17.496271 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.497238 kubelet[3333]: E0325 01:18:17.497219 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.497473 kubelet[3333]: W0325 01:18:17.497361 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.497473 kubelet[3333]: E0325 01:18:17.497450 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.498463 kubelet[3333]: E0325 01:18:17.498338 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.498463 kubelet[3333]: W0325 01:18:17.498360 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.498463 kubelet[3333]: E0325 01:18:17.498394 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.499329 kubelet[3333]: E0325 01:18:17.499200 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.499329 kubelet[3333]: W0325 01:18:17.499215 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.499329 kubelet[3333]: E0325 01:18:17.499247 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 01:18:17.499832 kubelet[3333]: E0325 01:18:17.499746 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.499832 kubelet[3333]: W0325 01:18:17.499758 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.500955 kubelet[3333]: E0325 01:18:17.500843 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.500955 kubelet[3333]: W0325 01:18:17.500860 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.501957 kubelet[3333]: E0325 01:18:17.501540 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.501957 kubelet[3333]: W0325 01:18:17.501557 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.501957 kubelet[3333]: E0325 01:18:17.501574 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.503479 kubelet[3333]: E0325 01:18:17.502975 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.503479 kubelet[3333]: W0325 01:18:17.502987 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.503479 kubelet[3333]: E0325 01:18:17.502999 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.503479 kubelet[3333]: E0325 01:18:17.503219 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.503961 kubelet[3333]: E0325 01:18:17.503871 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.503961 kubelet[3333]: W0325 01:18:17.503887 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.503961 kubelet[3333]: E0325 01:18:17.503909 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.504153 kubelet[3333]: E0325 01:18:17.504087 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 01:18:17.505459 kubelet[3333]: E0325 01:18:17.504926 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.505459 kubelet[3333]: W0325 01:18:17.504947 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.505459 kubelet[3333]: E0325 01:18:17.504960 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.517116 kubelet[3333]: E0325 01:18:17.516827 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.517116 kubelet[3333]: W0325 01:18:17.516852 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.517116 kubelet[3333]: E0325 01:18:17.516873 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.520375 kubelet[3333]: E0325 01:18:17.520347 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.520375 kubelet[3333]: W0325 01:18:17.520367 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.520547 kubelet[3333]: E0325 01:18:17.520387 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.575246 containerd[1757]: time="2025-03-25T01:18:17.574751803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c9d4d85c6-ghjbp,Uid:b295f55c-ee34-4b73-b398-00c3b94a537c,Namespace:calico-system,Attempt:0,} returns sandbox id \"e17e6d437681e299ce52aa70256d0c0993f42e8ab1220bb29b840e1fc7da10fc\"" Mar 25 01:18:17.578547 containerd[1757]: time="2025-03-25T01:18:17.578491041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7znqv,Uid:c290e755-1595-4bfc-bdcc-5fa900680b95,Namespace:calico-system,Attempt:0,}" Mar 25 01:18:17.578965 containerd[1757]: time="2025-03-25T01:18:17.578939761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\"" Mar 25 01:18:17.584776 kubelet[3333]: E0325 01:18:17.584661 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.584776 kubelet[3333]: W0325 01:18:17.584701 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.584776 kubelet[3333]: E0325 01:18:17.584720 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
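
The containerd records above capture the CRI side of pod startup: RunPodSandbox for calico-typha-7c9d4d85c6-ghjbp returns sandbox id e17e6d43…, a second RunPodSandbox is issued for calico-node-7znqv, and PullImage starts fetching ghcr.io/flatcar/calico/typha:v3.29.2. The kubelet drives all of this over containerd's CRI gRPC socket. A rough sketch of the same sandbox call made directly, assuming the k8s.io/cri-api v1 client and containerd's default socket path (a real request would carry far more of the sandbox config than shown):

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Same transport the kubelet uses to reach containerd's CRI service.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()

        // Metadata values are the ones visible in the log records above.
        client := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := client.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "calico-node-7znqv",
                    Namespace: "calico-system",
                    Uid:       "c290e755-1595-4bfc-bdcc-5fa900680b95",
                },
            },
        })
        if err != nil {
            log.Fatal(err)
        }
        // containerd answers with the sandbox id that later records reference.
        fmt.Println("sandbox id:", resp.PodSandboxId)
    }
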
Error: unexpected end of JSON input" Mar 25 01:18:17.585402 kubelet[3333]: E0325 01:18:17.585228 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.585402 kubelet[3333]: W0325 01:18:17.585240 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.585402 kubelet[3333]: E0325 01:18:17.585319 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.586301 kubelet[3333]: E0325 01:18:17.586211 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.586301 kubelet[3333]: W0325 01:18:17.586224 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.588229 kubelet[3333]: E0325 01:18:17.587780 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.588431 kubelet[3333]: E0325 01:18:17.588364 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.588431 kubelet[3333]: W0325 01:18:17.588375 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.589206 kubelet[3333]: E0325 01:18:17.588685 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.589206 kubelet[3333]: W0325 01:18:17.588696 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.589206 kubelet[3333]: E0325 01:18:17.588709 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.589206 kubelet[3333]: E0325 01:18:17.588846 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.589206 kubelet[3333]: W0325 01:18:17.588854 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.589206 kubelet[3333]: E0325 01:18:17.588862 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 01:18:17.589206 kubelet[3333]: E0325 01:18:17.588969 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.589206 kubelet[3333]: W0325 01:18:17.588975 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.589206 kubelet[3333]: E0325 01:18:17.588982 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.589206 kubelet[3333]: E0325 01:18:17.589125 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.592092 kubelet[3333]: W0325 01:18:17.589132 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.592092 kubelet[3333]: E0325 01:18:17.589139 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.592092 kubelet[3333]: E0325 01:18:17.588396 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.592092 kubelet[3333]: E0325 01:18:17.590483 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.592092 kubelet[3333]: W0325 01:18:17.590500 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.592092 kubelet[3333]: E0325 01:18:17.590575 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.592092 kubelet[3333]: E0325 01:18:17.590756 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.592092 kubelet[3333]: W0325 01:18:17.590765 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.592092 kubelet[3333]: E0325 01:18:17.590789 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 01:18:17.592092 kubelet[3333]: E0325 01:18:17.590941 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.592306 kubelet[3333]: W0325 01:18:17.590951 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.592306 kubelet[3333]: E0325 01:18:17.591041 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.592306 kubelet[3333]: E0325 01:18:17.591119 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.592306 kubelet[3333]: W0325 01:18:17.591128 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.592306 kubelet[3333]: E0325 01:18:17.591155 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.592306 kubelet[3333]: E0325 01:18:17.591294 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.592306 kubelet[3333]: W0325 01:18:17.591306 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.592306 kubelet[3333]: E0325 01:18:17.591326 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.592306 kubelet[3333]: E0325 01:18:17.591555 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.592306 kubelet[3333]: W0325 01:18:17.591567 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.592494 kubelet[3333]: E0325 01:18:17.591583 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.593943 kubelet[3333]: E0325 01:18:17.593689 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.593943 kubelet[3333]: W0325 01:18:17.593708 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.593943 kubelet[3333]: E0325 01:18:17.593732 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 01:18:17.593943 kubelet[3333]: E0325 01:18:17.593943 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.594703 kubelet[3333]: W0325 01:18:17.593952 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.594703 kubelet[3333]: E0325 01:18:17.593969 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.594703 kubelet[3333]: E0325 01:18:17.594097 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.594703 kubelet[3333]: W0325 01:18:17.594105 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.594703 kubelet[3333]: E0325 01:18:17.594113 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.594703 kubelet[3333]: E0325 01:18:17.594275 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.594703 kubelet[3333]: W0325 01:18:17.594284 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.594703 kubelet[3333]: E0325 01:18:17.594292 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.594703 kubelet[3333]: E0325 01:18:17.594438 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.594703 kubelet[3333]: W0325 01:18:17.594446 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.596783 kubelet[3333]: E0325 01:18:17.594455 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.596783 kubelet[3333]: E0325 01:18:17.594732 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.596783 kubelet[3333]: W0325 01:18:17.594742 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.596783 kubelet[3333]: E0325 01:18:17.594759 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 01:18:17.596783 kubelet[3333]: E0325 01:18:17.594898 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.596783 kubelet[3333]: W0325 01:18:17.594907 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.596783 kubelet[3333]: E0325 01:18:17.594921 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.596783 kubelet[3333]: E0325 01:18:17.595121 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.596783 kubelet[3333]: W0325 01:18:17.595132 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.596783 kubelet[3333]: E0325 01:18:17.595237 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.597750 kubelet[3333]: E0325 01:18:17.595803 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.597750 kubelet[3333]: W0325 01:18:17.596129 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.597750 kubelet[3333]: E0325 01:18:17.596624 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.598748 kubelet[3333]: E0325 01:18:17.598117 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.598748 kubelet[3333]: W0325 01:18:17.598133 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.598748 kubelet[3333]: E0325 01:18:17.598150 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.598923 kubelet[3333]: E0325 01:18:17.598905 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.598923 kubelet[3333]: W0325 01:18:17.598921 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.599004 kubelet[3333]: E0325 01:18:17.598933 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 01:18:17.610780 kubelet[3333]: E0325 01:18:17.610744 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:17.610918 kubelet[3333]: W0325 01:18:17.610905 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:17.611000 kubelet[3333]: E0325 01:18:17.610987 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:17.645927 containerd[1757]: time="2025-03-25T01:18:17.645624457Z" level=info msg="connecting to shim 7b08991c6a14f6e7a6f913c195bf1325a677ddacda527262ece03c581530bcbb" address="unix:///run/containerd/s/00366e12dd770dac40ce135b8c9ff24a0641df697ac561247b001313ad0e7539" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:18:17.675163 systemd[1]: Started cri-containerd-7b08991c6a14f6e7a6f913c195bf1325a677ddacda527262ece03c581530bcbb.scope - libcontainer container 7b08991c6a14f6e7a6f913c195bf1325a677ddacda527262ece03c581530bcbb. Mar 25 01:18:17.732072 containerd[1757]: time="2025-03-25T01:18:17.730854666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7znqv,Uid:c290e755-1595-4bfc-bdcc-5fa900680b95,Namespace:calico-system,Attempt:0,} returns sandbox id \"7b08991c6a14f6e7a6f913c195bf1325a677ddacda527262ece03c581530bcbb\"" Mar 25 01:18:19.093530 kubelet[3333]: E0325 01:18:19.092785 3333 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d6vfh" podUID="dda6a7d3-a9dd-4cc3-a459-07b4c5111bbc" Mar 25 01:18:19.340819 containerd[1757]: time="2025-03-25T01:18:19.340769355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:18:19.346128 containerd[1757]: time="2025-03-25T01:18:19.345934913Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.2: active requests=0, bytes read=28363957" Mar 25 01:18:19.356150 containerd[1757]: time="2025-03-25T01:18:19.356067470Z" level=info msg="ImageCreate event name:\"sha256:38a4e8457549414848315eae0d5ab8ecd6c51f4baaea849fe5edce714d81a999\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:18:19.363140 containerd[1757]: time="2025-03-25T01:18:19.363082747Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:18:19.364163 containerd[1757]: time="2025-03-25T01:18:19.363747307Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.2\" with image id \"sha256:38a4e8457549414848315eae0d5ab8ecd6c51f4baaea849fe5edce714d81a999\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\", size \"29733706\" in 1.784775426s" Mar 25 01:18:19.364163 containerd[1757]: time="2025-03-25T01:18:19.363777427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\" returns image reference 
\"sha256:38a4e8457549414848315eae0d5ab8ecd6c51f4baaea849fe5edce714d81a999\"" Mar 25 01:18:19.364998 containerd[1757]: time="2025-03-25T01:18:19.364971466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\"" Mar 25 01:18:19.373713 containerd[1757]: time="2025-03-25T01:18:19.372778144Z" level=info msg="CreateContainer within sandbox \"e17e6d437681e299ce52aa70256d0c0993f42e8ab1220bb29b840e1fc7da10fc\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 25 01:18:19.400276 containerd[1757]: time="2025-03-25T01:18:19.400226493Z" level=info msg="Container 5a37796ff68772fcfcdec93e006734a6ef7d307e590d4bde350aeb81c26346ab: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:18:19.421740 containerd[1757]: time="2025-03-25T01:18:19.421695446Z" level=info msg="CreateContainer within sandbox \"e17e6d437681e299ce52aa70256d0c0993f42e8ab1220bb29b840e1fc7da10fc\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5a37796ff68772fcfcdec93e006734a6ef7d307e590d4bde350aeb81c26346ab\"" Mar 25 01:18:19.422478 containerd[1757]: time="2025-03-25T01:18:19.422414965Z" level=info msg="StartContainer for \"5a37796ff68772fcfcdec93e006734a6ef7d307e590d4bde350aeb81c26346ab\"" Mar 25 01:18:19.423746 containerd[1757]: time="2025-03-25T01:18:19.423685245Z" level=info msg="connecting to shim 5a37796ff68772fcfcdec93e006734a6ef7d307e590d4bde350aeb81c26346ab" address="unix:///run/containerd/s/2b1efd9c5ed8c1c8d958b5763a4e226dd92fc9c52f38057e9bd42c2d5619c014" protocol=ttrpc version=3 Mar 25 01:18:19.447690 systemd[1]: Started cri-containerd-5a37796ff68772fcfcdec93e006734a6ef7d307e590d4bde350aeb81c26346ab.scope - libcontainer container 5a37796ff68772fcfcdec93e006734a6ef7d307e590d4bde350aeb81c26346ab. Mar 25 01:18:19.487751 containerd[1757]: time="2025-03-25T01:18:19.487630541Z" level=info msg="StartContainer for \"5a37796ff68772fcfcdec93e006734a6ef7d307e590d4bde350aeb81c26346ab\" returns successfully" Mar 25 01:18:20.279730 kubelet[3333]: E0325 01:18:20.279700 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:20.279730 kubelet[3333]: W0325 01:18:20.279723 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:20.280110 kubelet[3333]: E0325 01:18:20.279742 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:20.280110 kubelet[3333]: E0325 01:18:20.279906 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:20.280110 kubelet[3333]: W0325 01:18:20.279913 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:20.280110 kubelet[3333]: E0325 01:18:20.279948 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 01:18:20.280110 kubelet[3333]: E0325 01:18:20.280095 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:20.280110 kubelet[3333]: W0325 01:18:20.280103 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:20.280110 kubelet[3333]: E0325 01:18:20.280110 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:20.280282 kubelet[3333]: E0325 01:18:20.280248 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:20.280282 kubelet[3333]: W0325 01:18:20.280254 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:20.280282 kubelet[3333]: E0325 01:18:20.280261 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:20.280420 kubelet[3333]: E0325 01:18:20.280405 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:20.280420 kubelet[3333]: W0325 01:18:20.280417 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:20.280472 kubelet[3333]: E0325 01:18:20.280425 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:20.280566 kubelet[3333]: E0325 01:18:20.280553 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:20.280566 kubelet[3333]: W0325 01:18:20.280564 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:20.280646 kubelet[3333]: E0325 01:18:20.280571 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:20.280751 kubelet[3333]: E0325 01:18:20.280736 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:20.280751 kubelet[3333]: W0325 01:18:20.280748 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:20.280804 kubelet[3333]: E0325 01:18:20.280757 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 01:18:20.280905 kubelet[3333]: E0325 01:18:20.280891 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:20.280905 kubelet[3333]: W0325 01:18:20.280902 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:20.280962 kubelet[3333]: E0325 01:18:20.280910 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:20.281054 kubelet[3333]: E0325 01:18:20.281045 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:20.281054 kubelet[3333]: W0325 01:18:20.281052 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:20.281097 kubelet[3333]: E0325 01:18:20.281060 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:20.281203 kubelet[3333]: E0325 01:18:20.281190 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:20.281203 kubelet[3333]: W0325 01:18:20.281201 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:20.281259 kubelet[3333]: E0325 01:18:20.281208 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:20.281349 kubelet[3333]: E0325 01:18:20.281335 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:20.281349 kubelet[3333]: W0325 01:18:20.281346 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:20.281396 kubelet[3333]: E0325 01:18:20.281354 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:20.281495 kubelet[3333]: E0325 01:18:20.281481 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:20.281495 kubelet[3333]: W0325 01:18:20.281493 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:20.281548 kubelet[3333]: E0325 01:18:20.281500 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 01:18:20.281653 kubelet[3333]: E0325 01:18:20.281638 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:20.281653 kubelet[3333]: W0325 01:18:20.281650 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:20.281709 kubelet[3333]: E0325 01:18:20.281658 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:20.281806 kubelet[3333]: E0325 01:18:20.281793 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:20.281806 kubelet[3333]: W0325 01:18:20.281803 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:20.281857 kubelet[3333]: E0325 01:18:20.281811 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:20.281963 kubelet[3333]: E0325 01:18:20.281949 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:20.281963 kubelet[3333]: W0325 01:18:20.281959 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:20.282015 kubelet[3333]: E0325 01:18:20.281968 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:20.308543 kubelet[3333]: E0325 01:18:20.308509 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:20.308543 kubelet[3333]: W0325 01:18:20.308534 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:20.308727 kubelet[3333]: E0325 01:18:20.308554 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:20.308867 kubelet[3333]: E0325 01:18:20.308844 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:20.308867 kubelet[3333]: W0325 01:18:20.308862 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:20.308935 kubelet[3333]: E0325 01:18:20.308881 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
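
The error triples repeat in bursts (01:18:17.48–.61, then again from 01:18:20.27) because kubelet probes the FlexVolume plugin directory dynamically: rescans fire as the directory changes, and every rescan re-execs each driver directory found under the plugin root. The nodeagent~uds directory is present but its uds executable is not, so each pass emits the same unmarshal error, driver-call warning, and probe error until the binary lands on disk.
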
Error: unexpected end of JSON input" Mar 25 01:18:20.309122 kubelet[3333]: E0325 01:18:20.309104 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:20.309122 kubelet[3333]: W0325 01:18:20.309118 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:20.309190 kubelet[3333]: E0325 01:18:20.309135 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:20.309376 kubelet[3333]: E0325 01:18:20.309359 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:20.309376 kubelet[3333]: W0325 01:18:20.309373 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:20.309437 kubelet[3333]: E0325 01:18:20.309388 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:20.309611 kubelet[3333]: E0325 01:18:20.309580 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:20.309662 kubelet[3333]: W0325 01:18:20.309592 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:20.309662 kubelet[3333]: E0325 01:18:20.309655 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:20.309948 kubelet[3333]: E0325 01:18:20.309928 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:20.309948 kubelet[3333]: W0325 01:18:20.309943 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:20.310020 kubelet[3333]: E0325 01:18:20.309960 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:20.310183 kubelet[3333]: E0325 01:18:20.310166 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:20.310183 kubelet[3333]: W0325 01:18:20.310179 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:20.310277 kubelet[3333]: E0325 01:18:20.310257 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 01:18:20.310664 kubelet[3333]: E0325 01:18:20.310646 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:20.310664 kubelet[3333]: W0325 01:18:20.310661 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:20.310793 kubelet[3333]: E0325 01:18:20.310736 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:20.310826 kubelet[3333]: E0325 01:18:20.310815 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:20.310826 kubelet[3333]: W0325 01:18:20.310821 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:20.310917 kubelet[3333]: E0325 01:18:20.310898 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:20.311028 kubelet[3333]: E0325 01:18:20.311014 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:20.311028 kubelet[3333]: W0325 01:18:20.311025 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:20.311073 kubelet[3333]: E0325 01:18:20.311040 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:20.311251 kubelet[3333]: E0325 01:18:20.311235 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:20.311251 kubelet[3333]: W0325 01:18:20.311248 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:20.311312 kubelet[3333]: E0325 01:18:20.311263 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:20.311436 kubelet[3333]: E0325 01:18:20.311422 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:20.311436 kubelet[3333]: W0325 01:18:20.311434 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:20.311480 kubelet[3333]: E0325 01:18:20.311448 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 01:18:20.311686 kubelet[3333]: E0325 01:18:20.311669 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:20.311686 kubelet[3333]: W0325 01:18:20.311682 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:20.311763 kubelet[3333]: E0325 01:18:20.311700 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:20.312042 kubelet[3333]: E0325 01:18:20.312021 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:20.312042 kubelet[3333]: W0325 01:18:20.312035 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:20.312117 kubelet[3333]: E0325 01:18:20.312047 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:20.312230 kubelet[3333]: E0325 01:18:20.312215 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:20.312230 kubelet[3333]: W0325 01:18:20.312228 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:20.312292 kubelet[3333]: E0325 01:18:20.312242 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:20.312454 kubelet[3333]: E0325 01:18:20.312438 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:20.312454 kubelet[3333]: W0325 01:18:20.312451 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:20.312512 kubelet[3333]: E0325 01:18:20.312466 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:20.312856 kubelet[3333]: E0325 01:18:20.312837 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:20.312856 kubelet[3333]: W0325 01:18:20.312852 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:20.312950 kubelet[3333]: E0325 01:18:20.312865 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 01:18:20.313080 kubelet[3333]: E0325 01:18:20.313060 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:18:20.313080 kubelet[3333]: W0325 01:18:20.313075 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:18:20.313142 kubelet[3333]: E0325 01:18:20.313086 3333 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:18:20.621620 containerd[1757]: time="2025-03-25T01:18:20.621555086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:18:20.624004 containerd[1757]: time="2025-03-25T01:18:20.623957325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=5120152" Mar 25 01:18:20.629564 containerd[1757]: time="2025-03-25T01:18:20.629517763Z" level=info msg="ImageCreate event name:\"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:18:20.636332 containerd[1757]: time="2025-03-25T01:18:20.636266600Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:18:20.637075 containerd[1757]: time="2025-03-25T01:18:20.636963440Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id \"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6489869\" in 1.271786574s" Mar 25 01:18:20.637075 containerd[1757]: time="2025-03-25T01:18:20.636997680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\"" Mar 25 01:18:20.639303 containerd[1757]: time="2025-03-25T01:18:20.639081679Z" level=info msg="CreateContainer within sandbox \"7b08991c6a14f6e7a6f913c195bf1325a677ddacda527262ece03c581530bcbb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 25 01:18:20.672626 containerd[1757]: time="2025-03-25T01:18:20.670866388Z" level=info msg="Container 45c8a44c5af1947f4707575baae1dab300f0cc8231301c7a44bd501202dd6fe3: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:18:20.693942 containerd[1757]: time="2025-03-25T01:18:20.693882979Z" level=info msg="CreateContainer within sandbox \"7b08991c6a14f6e7a6f913c195bf1325a677ddacda527262ece03c581530bcbb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"45c8a44c5af1947f4707575baae1dab300f0cc8231301c7a44bd501202dd6fe3\"" Mar 25 01:18:20.695953 containerd[1757]: time="2025-03-25T01:18:20.694782019Z" level=info msg="StartContainer for \"45c8a44c5af1947f4707575baae1dab300f0cc8231301c7a44bd501202dd6fe3\"" Mar 25 01:18:20.696925 containerd[1757]: time="2025-03-25T01:18:20.696873898Z" level=info msg="connecting to 
shim 45c8a44c5af1947f4707575baae1dab300f0cc8231301c7a44bd501202dd6fe3" address="unix:///run/containerd/s/00366e12dd770dac40ce135b8c9ff24a0641df697ac561247b001313ad0e7539" protocol=ttrpc version=3 Mar 25 01:18:20.719859 systemd[1]: Started cri-containerd-45c8a44c5af1947f4707575baae1dab300f0cc8231301c7a44bd501202dd6fe3.scope - libcontainer container 45c8a44c5af1947f4707575baae1dab300f0cc8231301c7a44bd501202dd6fe3. Mar 25 01:18:20.771090 containerd[1757]: time="2025-03-25T01:18:20.769651471Z" level=info msg="StartContainer for \"45c8a44c5af1947f4707575baae1dab300f0cc8231301c7a44bd501202dd6fe3\" returns successfully" Mar 25 01:18:20.777862 systemd[1]: cri-containerd-45c8a44c5af1947f4707575baae1dab300f0cc8231301c7a44bd501202dd6fe3.scope: Deactivated successfully. Mar 25 01:18:20.780917 containerd[1757]: time="2025-03-25T01:18:20.780872267Z" level=info msg="TaskExit event in podsandbox handler container_id:\"45c8a44c5af1947f4707575baae1dab300f0cc8231301c7a44bd501202dd6fe3\" id:\"45c8a44c5af1947f4707575baae1dab300f0cc8231301c7a44bd501202dd6fe3\" pid:3953 exited_at:{seconds:1742865500 nanos:778504108}" Mar 25 01:18:20.781533 containerd[1757]: time="2025-03-25T01:18:20.780921547Z" level=info msg="received exit event container_id:\"45c8a44c5af1947f4707575baae1dab300f0cc8231301c7a44bd501202dd6fe3\" id:\"45c8a44c5af1947f4707575baae1dab300f0cc8231301c7a44bd501202dd6fe3\" pid:3953 exited_at:{seconds:1742865500 nanos:778504108}" Mar 25 01:18:20.799589 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45c8a44c5af1947f4707575baae1dab300f0cc8231301c7a44bd501202dd6fe3-rootfs.mount: Deactivated successfully. Mar 25 01:18:21.091946 kubelet[3333]: E0325 01:18:21.091858 3333 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d6vfh" podUID="dda6a7d3-a9dd-4cc3-a459-07b4c5111bbc" Mar 25 01:18:21.207498 kubelet[3333]: I0325 01:18:21.207470 3333 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 25 01:18:21.226987 kubelet[3333]: I0325 01:18:21.226750 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7c9d4d85c6-ghjbp" podStartSLOduration=2.439873639 podStartE2EDuration="4.226718984s" podCreationTimestamp="2025-03-25 01:18:17 +0000 UTC" firstStartedPulling="2025-03-25 01:18:17.577545602 +0000 UTC m=+13.585875956" lastFinishedPulling="2025-03-25 01:18:19.364390947 +0000 UTC m=+15.372721301" observedRunningTime="2025-03-25 01:18:20.223157912 +0000 UTC m=+16.231488266" watchObservedRunningTime="2025-03-25 01:18:21.226718984 +0000 UTC m=+17.235049418" Mar 25 01:18:22.213718 containerd[1757]: time="2025-03-25T01:18:22.213568502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\"" Mar 25 01:18:23.092152 kubelet[3333]: E0325 01:18:23.092081 3333 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d6vfh" podUID="dda6a7d3-a9dd-4cc3-a459-07b4c5111bbc" Mar 25 01:18:25.091738 kubelet[3333]: E0325 01:18:25.091683 3333 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d6vfh" podUID="dda6a7d3-a9dd-4cc3-a459-07b4c5111bbc" Mar 25 01:18:25.153656 containerd[1757]: time="2025-03-25T01:18:25.153075644Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:18:25.155421 containerd[1757]: time="2025-03-25T01:18:25.155351083Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=91227396" Mar 25 01:18:25.160540 containerd[1757]: time="2025-03-25T01:18:25.160480602Z" level=info msg="ImageCreate event name:\"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:18:25.165558 containerd[1757]: time="2025-03-25T01:18:25.165521281Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:18:25.166466 containerd[1757]: time="2025-03-25T01:18:25.166030441Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id \"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size \"92597153\" in 2.952358539s" Mar 25 01:18:25.166466 containerd[1757]: time="2025-03-25T01:18:25.166061681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\"" Mar 25 01:18:25.169030 containerd[1757]: time="2025-03-25T01:18:25.168988521Z" level=info msg="CreateContainer within sandbox \"7b08991c6a14f6e7a6f913c195bf1325a677ddacda527262ece03c581530bcbb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 25 01:18:25.197800 containerd[1757]: time="2025-03-25T01:18:25.197750355Z" level=info msg="Container 170b91d03fb43817e727cf27bc54e927f3bfd56b9ddc30afcd9bccc6067d4614: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:18:25.221437 containerd[1757]: time="2025-03-25T01:18:25.221371670Z" level=info msg="CreateContainer within sandbox \"7b08991c6a14f6e7a6f913c195bf1325a677ddacda527262ece03c581530bcbb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"170b91d03fb43817e727cf27bc54e927f3bfd56b9ddc30afcd9bccc6067d4614\"" Mar 25 01:18:25.222512 containerd[1757]: time="2025-03-25T01:18:25.222477470Z" level=info msg="StartContainer for \"170b91d03fb43817e727cf27bc54e927f3bfd56b9ddc30afcd9bccc6067d4614\"" Mar 25 01:18:25.225966 containerd[1757]: time="2025-03-25T01:18:25.225930069Z" level=info msg="connecting to shim 170b91d03fb43817e727cf27bc54e927f3bfd56b9ddc30afcd9bccc6067d4614" address="unix:///run/containerd/s/00366e12dd770dac40ce135b8c9ff24a0641df697ac561247b001313ad0e7539" protocol=ttrpc version=3 Mar 25 01:18:25.248912 systemd[1]: Started cri-containerd-170b91d03fb43817e727cf27bc54e927f3bfd56b9ddc30afcd9bccc6067d4614.scope - libcontainer container 170b91d03fb43817e727cf27bc54e927f3bfd56b9ddc30afcd9bccc6067d4614. 
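
The repeated kubelet errors earlier in this section ("Failed to unmarshal output for command: init ... unexpected end of JSON input") come from the kubelet's FlexVolume prober: on every filesystem event under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ it re-executes each driver binary with the single argument "init" and parses the JSON status object the driver is required to print on stdout. Here the nodeagent~uds directory exists but the uds executable does not, so each call yields empty output and the JSON decode fails; the pod2daemon-flexvol container pulled and started above is, presumably, what eventually installs that binary. A minimal sketch of the init handshake, assuming only the standard FlexVolume call convention (a real driver must also implement at least mount/unmount):

    // flexvol-init-stub.go -- sketch of the FlexVolume "init" reply the kubelet
    // expects; an empty stdout is exactly what produces "unexpected end of JSON
    // input" in the log above. Illustrative only, not the Calico uds driver.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    type initResponse struct {
    	Status       string `json:"status"` // "Success", "Failure" or "Not supported"
    	Capabilities struct {
    		Attach bool `json:"attach"` // false: kubelet skips attach/detach for this driver
    	} `json:"capabilities"`
    }

    func main() {
    	if len(os.Args) < 2 || os.Args[1] != "init" {
    		fmt.Println(`{"status":"Not supported"}`) // verbs this stub does not handle
    		os.Exit(1)
    	}
    	var resp initResponse
    	resp.Status = "Success"
    	out, _ := json.Marshal(resp)
    	fmt.Println(string(out)) // {"status":"Success","capabilities":{"attach":false}}
    }

Dropped in as /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, a binary with this behavior would satisfy the probe and silence the recurring driver-call.go/plugins.go triplet.
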
Mar 25 01:18:25.305537 containerd[1757]: time="2025-03-25T01:18:25.305492253Z" level=info msg="StartContainer for \"170b91d03fb43817e727cf27bc54e927f3bfd56b9ddc30afcd9bccc6067d4614\" returns successfully" Mar 25 01:18:26.859947 containerd[1757]: time="2025-03-25T01:18:26.859898145Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 25 01:18:26.862291 systemd[1]: cri-containerd-170b91d03fb43817e727cf27bc54e927f3bfd56b9ddc30afcd9bccc6067d4614.scope: Deactivated successfully. Mar 25 01:18:26.862536 systemd[1]: cri-containerd-170b91d03fb43817e727cf27bc54e927f3bfd56b9ddc30afcd9bccc6067d4614.scope: Consumed 371ms CPU time, 170.4M memory peak, 150.3M written to disk. Mar 25 01:18:26.863845 containerd[1757]: time="2025-03-25T01:18:26.863722064Z" level=info msg="received exit event container_id:\"170b91d03fb43817e727cf27bc54e927f3bfd56b9ddc30afcd9bccc6067d4614\" id:\"170b91d03fb43817e727cf27bc54e927f3bfd56b9ddc30afcd9bccc6067d4614\" pid:4012 exited_at:{seconds:1742865506 nanos:863446944}" Mar 25 01:18:26.863845 containerd[1757]: time="2025-03-25T01:18:26.863809584Z" level=info msg="TaskExit event in podsandbox handler container_id:\"170b91d03fb43817e727cf27bc54e927f3bfd56b9ddc30afcd9bccc6067d4614\" id:\"170b91d03fb43817e727cf27bc54e927f3bfd56b9ddc30afcd9bccc6067d4614\" pid:4012 exited_at:{seconds:1742865506 nanos:863446944}" Mar 25 01:18:26.881099 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-170b91d03fb43817e727cf27bc54e927f3bfd56b9ddc30afcd9bccc6067d4614-rootfs.mount: Deactivated successfully. Mar 25 01:18:26.945731 kubelet[3333]: I0325 01:18:26.945317 3333 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Mar 25 01:18:27.150377 kubelet[3333]: I0325 01:18:27.057111 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lkfg\" (UniqueName: \"kubernetes.io/projected/f0e191f4-a783-46bf-8a30-3f909e5d01f7-kube-api-access-8lkfg\") pod \"coredns-668d6bf9bc-c4w8j\" (UID: \"f0e191f4-a783-46bf-8a30-3f909e5d01f7\") " pod="kube-system/coredns-668d6bf9bc-c4w8j" Mar 25 01:18:27.150377 kubelet[3333]: I0325 01:18:27.057147 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncwt4\" (UniqueName: \"kubernetes.io/projected/40b8113e-8b14-47e0-bcc0-3d1d07612344-kube-api-access-ncwt4\") pod \"coredns-668d6bf9bc-k5kmp\" (UID: \"40b8113e-8b14-47e0-bcc0-3d1d07612344\") " pod="kube-system/coredns-668d6bf9bc-k5kmp" Mar 25 01:18:27.150377 kubelet[3333]: I0325 01:18:27.057177 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5kgb\" (UniqueName: \"kubernetes.io/projected/6fe0d135-10c8-485b-b95f-2f53eb1fd5f3-kube-api-access-h5kgb\") pod \"calico-kube-controllers-5f856dbd65-p9hgt\" (UID: \"6fe0d135-10c8-485b-b95f-2f53eb1fd5f3\") " pod="calico-system/calico-kube-controllers-5f856dbd65-p9hgt" Mar 25 01:18:27.150377 kubelet[3333]: I0325 01:18:27.057201 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3a085748-6d3d-40b0-9ada-ed65cd6602a7-calico-apiserver-certs\") pod \"calico-apiserver-85d9d7df7-t5jcx\" (UID: \"3a085748-6d3d-40b0-9ada-ed65cd6602a7\") " 
pod="calico-apiserver/calico-apiserver-85d9d7df7-t5jcx" Mar 25 01:18:27.150377 kubelet[3333]: I0325 01:18:27.057229 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c6qg\" (UniqueName: \"kubernetes.io/projected/3a085748-6d3d-40b0-9ada-ed65cd6602a7-kube-api-access-7c6qg\") pod \"calico-apiserver-85d9d7df7-t5jcx\" (UID: \"3a085748-6d3d-40b0-9ada-ed65cd6602a7\") " pod="calico-apiserver/calico-apiserver-85d9d7df7-t5jcx" Mar 25 01:18:26.990376 systemd[1]: Created slice kubepods-burstable-podf0e191f4_a783_46bf_8a30_3f909e5d01f7.slice - libcontainer container kubepods-burstable-podf0e191f4_a783_46bf_8a30_3f909e5d01f7.slice. Mar 25 01:18:27.150930 kubelet[3333]: I0325 01:18:27.057247 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40b8113e-8b14-47e0-bcc0-3d1d07612344-config-volume\") pod \"coredns-668d6bf9bc-k5kmp\" (UID: \"40b8113e-8b14-47e0-bcc0-3d1d07612344\") " pod="kube-system/coredns-668d6bf9bc-k5kmp" Mar 25 01:18:27.150930 kubelet[3333]: I0325 01:18:27.057314 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjb8k\" (UniqueName: \"kubernetes.io/projected/8544a506-2dac-42c8-8854-ee9ceda35a43-kube-api-access-tjb8k\") pod \"calico-apiserver-85d9d7df7-rbgrp\" (UID: \"8544a506-2dac-42c8-8854-ee9ceda35a43\") " pod="calico-apiserver/calico-apiserver-85d9d7df7-rbgrp" Mar 25 01:18:27.150930 kubelet[3333]: I0325 01:18:27.057339 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0e191f4-a783-46bf-8a30-3f909e5d01f7-config-volume\") pod \"coredns-668d6bf9bc-c4w8j\" (UID: \"f0e191f4-a783-46bf-8a30-3f909e5d01f7\") " pod="kube-system/coredns-668d6bf9bc-c4w8j" Mar 25 01:18:27.150930 kubelet[3333]: I0325 01:18:27.057361 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fe0d135-10c8-485b-b95f-2f53eb1fd5f3-tigera-ca-bundle\") pod \"calico-kube-controllers-5f856dbd65-p9hgt\" (UID: \"6fe0d135-10c8-485b-b95f-2f53eb1fd5f3\") " pod="calico-system/calico-kube-controllers-5f856dbd65-p9hgt" Mar 25 01:18:27.150930 kubelet[3333]: I0325 01:18:27.057386 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8544a506-2dac-42c8-8854-ee9ceda35a43-calico-apiserver-certs\") pod \"calico-apiserver-85d9d7df7-rbgrp\" (UID: \"8544a506-2dac-42c8-8854-ee9ceda35a43\") " pod="calico-apiserver/calico-apiserver-85d9d7df7-rbgrp" Mar 25 01:18:27.002611 systemd[1]: Created slice kubepods-besteffort-pod8544a506_2dac_42c8_8854_ee9ceda35a43.slice - libcontainer container kubepods-besteffort-pod8544a506_2dac_42c8_8854_ee9ceda35a43.slice. Mar 25 01:18:27.011315 systemd[1]: Created slice kubepods-besteffort-pod6fe0d135_10c8_485b_b95f_2f53eb1fd5f3.slice - libcontainer container kubepods-besteffort-pod6fe0d135_10c8_485b_b95f_2f53eb1fd5f3.slice. Mar 25 01:18:27.025292 systemd[1]: Created slice kubepods-burstable-pod40b8113e_8b14_47e0_bcc0_3d1d07612344.slice - libcontainer container kubepods-burstable-pod40b8113e_8b14_47e0_bcc0_3d1d07612344.slice. 
Mar 25 01:18:27.031312 systemd[1]: Created slice kubepods-besteffort-pod3a085748_6d3d_40b0_9ada_ed65cd6602a7.slice - libcontainer container kubepods-besteffort-pod3a085748_6d3d_40b0_9ada_ed65cd6602a7.slice. Mar 25 01:18:27.096014 systemd[1]: Created slice kubepods-besteffort-poddda6a7d3_a9dd_4cc3_a459_07b4c5111bbc.slice - libcontainer container kubepods-besteffort-poddda6a7d3_a9dd_4cc3_a459_07b4c5111bbc.slice. Mar 25 01:18:27.151949 containerd[1757]: time="2025-03-25T01:18:27.151892047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d6vfh,Uid:dda6a7d3-a9dd-4cc3-a459-07b4c5111bbc,Namespace:calico-system,Attempt:0,}" Mar 25 01:18:27.451471 containerd[1757]: time="2025-03-25T01:18:27.451371107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c4w8j,Uid:f0e191f4-a783-46bf-8a30-3f909e5d01f7,Namespace:kube-system,Attempt:0,}" Mar 25 01:18:27.454379 containerd[1757]: time="2025-03-25T01:18:27.454209267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f856dbd65-p9hgt,Uid:6fe0d135-10c8-485b-b95f-2f53eb1fd5f3,Namespace:calico-system,Attempt:0,}" Mar 25 01:18:27.454379 containerd[1757]: time="2025-03-25T01:18:27.454213907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k5kmp,Uid:40b8113e-8b14-47e0-bcc0-3d1d07612344,Namespace:kube-system,Attempt:0,}" Mar 25 01:18:27.454634 containerd[1757]: time="2025-03-25T01:18:27.454590987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d9d7df7-rbgrp,Uid:8544a506-2dac-42c8-8854-ee9ceda35a43,Namespace:calico-apiserver,Attempt:0,}" Mar 25 01:18:27.455093 containerd[1757]: time="2025-03-25T01:18:27.455064427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d9d7df7-t5jcx,Uid:3a085748-6d3d-40b0-9ada-ed65cd6602a7,Namespace:calico-apiserver,Attempt:0,}" Mar 25 01:18:27.853743 containerd[1757]: time="2025-03-25T01:18:27.853630108Z" level=error msg="Failed to destroy network for sandbox \"0df7daad09b01304660b3d01e25fa2561ff589c27cc1ad14b8989e37568c582c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:18:27.861187 containerd[1757]: time="2025-03-25T01:18:27.859940866Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d6vfh,Uid:dda6a7d3-a9dd-4cc3-a459-07b4c5111bbc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0df7daad09b01304660b3d01e25fa2561ff589c27cc1ad14b8989e37568c582c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:18:27.861526 kubelet[3333]: E0325 01:18:27.860193 3333 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0df7daad09b01304660b3d01e25fa2561ff589c27cc1ad14b8989e37568c582c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:18:27.861526 kubelet[3333]: E0325 01:18:27.860271 3333 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0df7daad09b01304660b3d01e25fa2561ff589c27cc1ad14b8989e37568c582c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d6vfh" Mar 25 01:18:27.861526 kubelet[3333]: E0325 01:18:27.860290 3333 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0df7daad09b01304660b3d01e25fa2561ff589c27cc1ad14b8989e37568c582c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d6vfh" Mar 25 01:18:27.861709 kubelet[3333]: E0325 01:18:27.860329 3333 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d6vfh_calico-system(dda6a7d3-a9dd-4cc3-a459-07b4c5111bbc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d6vfh_calico-system(dda6a7d3-a9dd-4cc3-a459-07b4c5111bbc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0df7daad09b01304660b3d01e25fa2561ff589c27cc1ad14b8989e37568c582c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d6vfh" podUID="dda6a7d3-a9dd-4cc3-a459-07b4c5111bbc" Mar 25 01:18:27.916628 containerd[1757]: time="2025-03-25T01:18:27.914407535Z" level=error msg="Failed to destroy network for sandbox \"f9f25c0d9abd0941c5257d3fe5fd4f311bd008beb0b287080e6e70b5b8d1c94d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:18:27.917343 systemd[1]: run-netns-cni\x2dad67177e\x2defb4\x2d3e85\x2df7e4\x2d4d1dca7076ae.mount: Deactivated successfully. Mar 25 01:18:27.919417 containerd[1757]: time="2025-03-25T01:18:27.919377455Z" level=error msg="Failed to destroy network for sandbox \"c45acc869955cfca972a9a1a0c922374770867288db70eca1de130802dae7dc8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:18:27.923550 systemd[1]: run-netns-cni\x2d9e8ecc04\x2d3f40\x2d2b53\x2dd885\x2d40c45362b063.mount: Deactivated successfully. 
Mar 25 01:18:27.926378 containerd[1757]: time="2025-03-25T01:18:27.926318493Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d9d7df7-rbgrp,Uid:8544a506-2dac-42c8-8854-ee9ceda35a43,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9f25c0d9abd0941c5257d3fe5fd4f311bd008beb0b287080e6e70b5b8d1c94d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:18:27.926930 kubelet[3333]: E0325 01:18:27.926753 3333 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9f25c0d9abd0941c5257d3fe5fd4f311bd008beb0b287080e6e70b5b8d1c94d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:18:27.926930 kubelet[3333]: E0325 01:18:27.926815 3333 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9f25c0d9abd0941c5257d3fe5fd4f311bd008beb0b287080e6e70b5b8d1c94d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85d9d7df7-rbgrp" Mar 25 01:18:27.926930 kubelet[3333]: E0325 01:18:27.926837 3333 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9f25c0d9abd0941c5257d3fe5fd4f311bd008beb0b287080e6e70b5b8d1c94d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85d9d7df7-rbgrp" Mar 25 01:18:27.927083 kubelet[3333]: E0325 01:18:27.926888 3333 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-85d9d7df7-rbgrp_calico-apiserver(8544a506-2dac-42c8-8854-ee9ceda35a43)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-85d9d7df7-rbgrp_calico-apiserver(8544a506-2dac-42c8-8854-ee9ceda35a43)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9f25c0d9abd0941c5257d3fe5fd4f311bd008beb0b287080e6e70b5b8d1c94d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85d9d7df7-rbgrp" podUID="8544a506-2dac-42c8-8854-ee9ceda35a43" Mar 25 01:18:27.930104 containerd[1757]: time="2025-03-25T01:18:27.930057812Z" level=error msg="Failed to destroy network for sandbox \"e15c18ea68c1c4e581649e866f400111819da6def8d765d4e6b1ef489ee3add0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:18:27.932336 containerd[1757]: time="2025-03-25T01:18:27.932285172Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d9d7df7-t5jcx,Uid:3a085748-6d3d-40b0-9ada-ed65cd6602a7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"c45acc869955cfca972a9a1a0c922374770867288db70eca1de130802dae7dc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:18:27.933512 systemd[1]: run-netns-cni\x2dd486fdd6\x2dde2c\x2d840e\x2d3456\x2d25b893faa152.mount: Deactivated successfully. Mar 25 01:18:27.933739 kubelet[3333]: E0325 01:18:27.933650 3333 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c45acc869955cfca972a9a1a0c922374770867288db70eca1de130802dae7dc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:18:27.933739 kubelet[3333]: E0325 01:18:27.933711 3333 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c45acc869955cfca972a9a1a0c922374770867288db70eca1de130802dae7dc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85d9d7df7-t5jcx" Mar 25 01:18:27.933739 kubelet[3333]: E0325 01:18:27.933730 3333 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c45acc869955cfca972a9a1a0c922374770867288db70eca1de130802dae7dc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85d9d7df7-t5jcx" Mar 25 01:18:27.933925 kubelet[3333]: E0325 01:18:27.933779 3333 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-85d9d7df7-t5jcx_calico-apiserver(3a085748-6d3d-40b0-9ada-ed65cd6602a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-85d9d7df7-t5jcx_calico-apiserver(3a085748-6d3d-40b0-9ada-ed65cd6602a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c45acc869955cfca972a9a1a0c922374770867288db70eca1de130802dae7dc8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85d9d7df7-t5jcx" podUID="3a085748-6d3d-40b0-9ada-ed65cd6602a7" Mar 25 01:18:27.939666 containerd[1757]: time="2025-03-25T01:18:27.939067411Z" level=error msg="Failed to destroy network for sandbox \"1b317a9f2b66ba64a0c73fe9ab198e33143af8bb71de60d391c566a448bf757a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:18:27.940230 containerd[1757]: time="2025-03-25T01:18:27.939923290Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f856dbd65-p9hgt,Uid:6fe0d135-10c8-485b-b95f-2f53eb1fd5f3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e15c18ea68c1c4e581649e866f400111819da6def8d765d4e6b1ef489ee3add0\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:18:27.943620 kubelet[3333]: E0325 01:18:27.940203 3333 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e15c18ea68c1c4e581649e866f400111819da6def8d765d4e6b1ef489ee3add0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:18:27.943620 kubelet[3333]: E0325 01:18:27.940875 3333 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e15c18ea68c1c4e581649e866f400111819da6def8d765d4e6b1ef489ee3add0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f856dbd65-p9hgt" Mar 25 01:18:27.943620 kubelet[3333]: E0325 01:18:27.940895 3333 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e15c18ea68c1c4e581649e866f400111819da6def8d765d4e6b1ef489ee3add0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f856dbd65-p9hgt" Mar 25 01:18:27.942340 systemd[1]: run-netns-cni\x2dbe351543\x2debd9\x2d887c\x2d23e8\x2dc5885e5e7f66.mount: Deactivated successfully. Mar 25 01:18:27.943822 kubelet[3333]: E0325 01:18:27.940943 3333 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f856dbd65-p9hgt_calico-system(6fe0d135-10c8-485b-b95f-2f53eb1fd5f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5f856dbd65-p9hgt_calico-system(6fe0d135-10c8-485b-b95f-2f53eb1fd5f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e15c18ea68c1c4e581649e866f400111819da6def8d765d4e6b1ef489ee3add0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f856dbd65-p9hgt" podUID="6fe0d135-10c8-485b-b95f-2f53eb1fd5f3" Mar 25 01:18:27.948913 containerd[1757]: time="2025-03-25T01:18:27.948743969Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c4w8j,Uid:f0e191f4-a783-46bf-8a30-3f909e5d01f7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b317a9f2b66ba64a0c73fe9ab198e33143af8bb71de60d391c566a448bf757a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:18:27.949709 kubelet[3333]: E0325 01:18:27.949442 3333 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b317a9f2b66ba64a0c73fe9ab198e33143af8bb71de60d391c566a448bf757a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:18:27.949709 kubelet[3333]: E0325 01:18:27.949492 3333 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b317a9f2b66ba64a0c73fe9ab198e33143af8bb71de60d391c566a448bf757a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-c4w8j" Mar 25 01:18:27.949709 kubelet[3333]: E0325 01:18:27.949510 3333 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b317a9f2b66ba64a0c73fe9ab198e33143af8bb71de60d391c566a448bf757a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-c4w8j" Mar 25 01:18:27.950156 kubelet[3333]: E0325 01:18:27.949543 3333 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-c4w8j_kube-system(f0e191f4-a783-46bf-8a30-3f909e5d01f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-c4w8j_kube-system(f0e191f4-a783-46bf-8a30-3f909e5d01f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b317a9f2b66ba64a0c73fe9ab198e33143af8bb71de60d391c566a448bf757a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-c4w8j" podUID="f0e191f4-a783-46bf-8a30-3f909e5d01f7" Mar 25 01:18:27.950323 containerd[1757]: time="2025-03-25T01:18:27.949815568Z" level=error msg="Failed to destroy network for sandbox \"e8e567a69f1b495d06aaa548e5801a83dcaade09b8d7af0ff5fb914a013f6d60\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:18:27.954434 containerd[1757]: time="2025-03-25T01:18:27.954348528Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k5kmp,Uid:40b8113e-8b14-47e0-bcc0-3d1d07612344,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8e567a69f1b495d06aaa548e5801a83dcaade09b8d7af0ff5fb914a013f6d60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:18:27.954775 kubelet[3333]: E0325 01:18:27.954643 3333 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8e567a69f1b495d06aaa548e5801a83dcaade09b8d7af0ff5fb914a013f6d60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:18:27.954775 kubelet[3333]: E0325 01:18:27.954688 3333 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8e567a69f1b495d06aaa548e5801a83dcaade09b8d7af0ff5fb914a013f6d60\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k5kmp" Mar 25 01:18:27.954775 kubelet[3333]: E0325 01:18:27.954705 3333 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8e567a69f1b495d06aaa548e5801a83dcaade09b8d7af0ff5fb914a013f6d60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k5kmp" Mar 25 01:18:27.954887 kubelet[3333]: E0325 01:18:27.954737 3333 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-k5kmp_kube-system(40b8113e-8b14-47e0-bcc0-3d1d07612344)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-k5kmp_kube-system(40b8113e-8b14-47e0-bcc0-3d1d07612344)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8e567a69f1b495d06aaa548e5801a83dcaade09b8d7af0ff5fb914a013f6d60\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-k5kmp" podUID="40b8113e-8b14-47e0-bcc0-3d1d07612344" Mar 25 01:18:28.233194 containerd[1757]: time="2025-03-25T01:18:28.231793192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\"" Mar 25 01:18:28.881525 systemd[1]: run-netns-cni\x2db2070790\x2d9ca0\x2db5e4\x2db74e\x2da373217b4dc1.mount: Deactivated successfully. Mar 25 01:18:32.289375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3322135305.mount: Deactivated successfully. 
Mar 25 01:18:37.393937 containerd[1757]: time="2025-03-25T01:18:37.393882652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:18:37.456037 containerd[1757]: time="2025-03-25T01:18:37.455966391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=137086024" Mar 25 01:18:37.499986 containerd[1757]: time="2025-03-25T01:18:37.499912976Z" level=info msg="ImageCreate event name:\"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:18:37.506038 containerd[1757]: time="2025-03-25T01:18:37.505969734Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:18:37.506879 containerd[1757]: time="2025-03-25T01:18:37.506479093Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"137085886\" in 9.274644301s" Mar 25 01:18:37.506879 containerd[1757]: time="2025-03-25T01:18:37.506515133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\"" Mar 25 01:18:37.519765 containerd[1757]: time="2025-03-25T01:18:37.519719329Z" level=info msg="CreateContainer within sandbox \"7b08991c6a14f6e7a6f913c195bf1325a677ddacda527262ece03c581530bcbb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 25 01:18:37.695845 containerd[1757]: time="2025-03-25T01:18:37.695733668Z" level=info msg="Container 75d5eaca95570629b3e8a46f8a8e245db501832c00567a5b7702b2dc1dcaab6a: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:18:37.855422 containerd[1757]: time="2025-03-25T01:18:37.855375254Z" level=info msg="CreateContainer within sandbox \"7b08991c6a14f6e7a6f913c195bf1325a677ddacda527262ece03c581530bcbb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"75d5eaca95570629b3e8a46f8a8e245db501832c00567a5b7702b2dc1dcaab6a\"" Mar 25 01:18:37.856365 containerd[1757]: time="2025-03-25T01:18:37.856082293Z" level=info msg="StartContainer for \"75d5eaca95570629b3e8a46f8a8e245db501832c00567a5b7702b2dc1dcaab6a\"" Mar 25 01:18:37.858379 containerd[1757]: time="2025-03-25T01:18:37.858350493Z" level=info msg="connecting to shim 75d5eaca95570629b3e8a46f8a8e245db501832c00567a5b7702b2dc1dcaab6a" address="unix:///run/containerd/s/00366e12dd770dac40ce135b8c9ff24a0641df697ac561247b001313ad0e7539" protocol=ttrpc version=3 Mar 25 01:18:37.882800 systemd[1]: Started cri-containerd-75d5eaca95570629b3e8a46f8a8e245db501832c00567a5b7702b2dc1dcaab6a.scope - libcontainer container 75d5eaca95570629b3e8a46f8a8e245db501832c00567a5b7702b2dc1dcaab6a. 
Mar 25 01:18:37.928862 containerd[1757]: time="2025-03-25T01:18:37.928812228Z" level=info msg="StartContainer for \"75d5eaca95570629b3e8a46f8a8e245db501832c00567a5b7702b2dc1dcaab6a\" returns successfully" Mar 25 01:18:38.092768 containerd[1757]: time="2025-03-25T01:18:38.092425492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k5kmp,Uid:40b8113e-8b14-47e0-bcc0-3d1d07612344,Namespace:kube-system,Attempt:0,}" Mar 25 01:18:38.164752 containerd[1757]: time="2025-03-25T01:18:38.164696107Z" level=error msg="Failed to destroy network for sandbox \"4548823ab0b6d47d69653cc4471f8e51d072cec84219f48b00a4a46b9f280463\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:18:38.193543 containerd[1757]: time="2025-03-25T01:18:38.193487538Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k5kmp,Uid:40b8113e-8b14-47e0-bcc0-3d1d07612344,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4548823ab0b6d47d69653cc4471f8e51d072cec84219f48b00a4a46b9f280463\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:18:38.193749 kubelet[3333]: E0325 01:18:38.193713 3333 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4548823ab0b6d47d69653cc4471f8e51d072cec84219f48b00a4a46b9f280463\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:18:38.194024 kubelet[3333]: E0325 01:18:38.193767 3333 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4548823ab0b6d47d69653cc4471f8e51d072cec84219f48b00a4a46b9f280463\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k5kmp" Mar 25 01:18:38.194024 kubelet[3333]: E0325 01:18:38.193796 3333 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4548823ab0b6d47d69653cc4471f8e51d072cec84219f48b00a4a46b9f280463\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k5kmp" Mar 25 01:18:38.194024 kubelet[3333]: E0325 01:18:38.193832 3333 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-k5kmp_kube-system(40b8113e-8b14-47e0-bcc0-3d1d07612344)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-k5kmp_kube-system(40b8113e-8b14-47e0-bcc0-3d1d07612344)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4548823ab0b6d47d69653cc4471f8e51d072cec84219f48b00a4a46b9f280463\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-668d6bf9bc-k5kmp" podUID="40b8113e-8b14-47e0-bcc0-3d1d07612344" Mar 25 01:18:38.228017 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Mar 25 01:18:38.228133 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Mar 25 01:18:38.282329 kubelet[3333]: I0325 01:18:38.281849 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7znqv" podStartSLOduration=1.508765838 podStartE2EDuration="21.281832707s" podCreationTimestamp="2025-03-25 01:18:17 +0000 UTC" firstStartedPulling="2025-03-25 01:18:17.734256264 +0000 UTC m=+13.742586578" lastFinishedPulling="2025-03-25 01:18:37.507323093 +0000 UTC m=+33.515653447" observedRunningTime="2025-03-25 01:18:38.278098709 +0000 UTC m=+34.286429063" watchObservedRunningTime="2025-03-25 01:18:38.281832707 +0000 UTC m=+34.290163061" Mar 25 01:18:38.361189 containerd[1757]: time="2025-03-25T01:18:38.361059720Z" level=info msg="TaskExit event in podsandbox handler container_id:\"75d5eaca95570629b3e8a46f8a8e245db501832c00567a5b7702b2dc1dcaab6a\" id:\"730a13d3fe7a784160257f0dc56d6250212f1de16ab28b83850047dbbb271c54\" pid:4318 exit_status:1 exited_at:{seconds:1742865518 nanos:360240560}" Mar 25 01:18:38.512199 systemd[1]: run-netns-cni\x2ded362c7b\x2d26e4\x2d8986\x2d43d5\x2d6d32e643cd65.mount: Deactivated successfully. Mar 25 01:18:39.092805 containerd[1757]: time="2025-03-25T01:18:39.092742989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f856dbd65-p9hgt,Uid:6fe0d135-10c8-485b-b95f-2f53eb1fd5f3,Namespace:calico-system,Attempt:0,}" Mar 25 01:18:39.302508 systemd-networkd[1452]: calidf092883c6f: Link UP Mar 25 01:18:39.302865 systemd-networkd[1452]: calidf092883c6f: Gained carrier Mar 25 01:18:39.330959 containerd[1757]: 2025-03-25 01:18:39.162 [INFO][4348] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 25 01:18:39.330959 containerd[1757]: 2025-03-25 01:18:39.178 [INFO][4348] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--5f856dbd65--p9hgt-eth0 calico-kube-controllers-5f856dbd65- calico-system 6fe0d135-10c8-485b-b95f-2f53eb1fd5f3 697 0 2025-03-25 01:18:17 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5f856dbd65 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4284.0.0-a-f1ebfb6c0b calico-kube-controllers-5f856dbd65-p9hgt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calidf092883c6f [] []}} ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" Namespace="calico-system" Pod="calico-kube-controllers-5f856dbd65-p9hgt" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--5f856dbd65--p9hgt-" Mar 25 01:18:39.330959 containerd[1757]: 2025-03-25 01:18:39.178 [INFO][4348] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" Namespace="calico-system" Pod="calico-kube-controllers-5f856dbd65-p9hgt" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--5f856dbd65--p9hgt-eth0" Mar 25 01:18:39.330959 containerd[1757]: 2025-03-25 01:18:39.203 [INFO][4360] ipam/ipam_plugin.go 225: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" HandleID="k8s-pod-network.230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" Workload="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--5f856dbd65--p9hgt-eth0" Mar 25 01:18:39.332820 containerd[1757]: 2025-03-25 01:18:39.216 [INFO][4360] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" HandleID="k8s-pod-network.230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" Workload="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--5f856dbd65--p9hgt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028d500), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4284.0.0-a-f1ebfb6c0b", "pod":"calico-kube-controllers-5f856dbd65-p9hgt", "timestamp":"2025-03-25 01:18:39.203263711 +0000 UTC"}, Hostname:"ci-4284.0.0-a-f1ebfb6c0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 25 01:18:39.332820 containerd[1757]: 2025-03-25 01:18:39.216 [INFO][4360] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 25 01:18:39.332820 containerd[1757]: 2025-03-25 01:18:39.216 [INFO][4360] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 25 01:18:39.332820 containerd[1757]: 2025-03-25 01:18:39.216 [INFO][4360] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-a-f1ebfb6c0b' Mar 25 01:18:39.332820 containerd[1757]: 2025-03-25 01:18:39.221 [INFO][4360] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:39.332820 containerd[1757]: 2025-03-25 01:18:39.230 [INFO][4360] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:39.332820 containerd[1757]: 2025-03-25 01:18:39.240 [INFO][4360] ipam/ipam.go 489: Trying affinity for 192.168.74.0/26 host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:39.332820 containerd[1757]: 2025-03-25 01:18:39.243 [INFO][4360] ipam/ipam.go 155: Attempting to load block cidr=192.168.74.0/26 host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:39.332820 containerd[1757]: 2025-03-25 01:18:39.245 [INFO][4360] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.0/26 host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:39.333042 containerd[1757]: 2025-03-25 01:18:39.246 [INFO][4360] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.0/26 handle="k8s-pod-network.230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:39.333042 containerd[1757]: 2025-03-25 01:18:39.248 [INFO][4360] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2 Mar 25 01:18:39.333042 containerd[1757]: 2025-03-25 01:18:39.264 [INFO][4360] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.74.0/26 handle="k8s-pod-network.230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:39.333042 containerd[1757]: 2025-03-25 01:18:39.273 [INFO][4360] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.74.1/26] block=192.168.74.0/26 
handle="k8s-pod-network.230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:39.333042 containerd[1757]: 2025-03-25 01:18:39.274 [INFO][4360] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.1/26] handle="k8s-pod-network.230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:39.333042 containerd[1757]: 2025-03-25 01:18:39.274 [INFO][4360] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 25 01:18:39.333042 containerd[1757]: 2025-03-25 01:18:39.274 [INFO][4360] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.1/26] IPv6=[] ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" HandleID="k8s-pod-network.230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" Workload="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--5f856dbd65--p9hgt-eth0" Mar 25 01:18:39.333209 containerd[1757]: 2025-03-25 01:18:39.279 [INFO][4348] cni-plugin/k8s.go 386: Populated endpoint ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" Namespace="calico-system" Pod="calico-kube-controllers-5f856dbd65-p9hgt" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--5f856dbd65--p9hgt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--5f856dbd65--p9hgt-eth0", GenerateName:"calico-kube-controllers-5f856dbd65-", Namespace:"calico-system", SelfLink:"", UID:"6fe0d135-10c8-485b-b95f-2f53eb1fd5f3", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 1, 18, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f856dbd65", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-a-f1ebfb6c0b", ContainerID:"", Pod:"calico-kube-controllers-5f856dbd65-p9hgt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidf092883c6f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 01:18:39.333259 containerd[1757]: 2025-03-25 01:18:39.279 [INFO][4348] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.74.1/32] ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" Namespace="calico-system" Pod="calico-kube-controllers-5f856dbd65-p9hgt" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--5f856dbd65--p9hgt-eth0" Mar 25 01:18:39.333259 containerd[1757]: 2025-03-25 01:18:39.279 [INFO][4348] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidf092883c6f ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" Namespace="calico-system" Pod="calico-kube-controllers-5f856dbd65-p9hgt" 
WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--5f856dbd65--p9hgt-eth0" Mar 25 01:18:39.333259 containerd[1757]: 2025-03-25 01:18:39.303 [INFO][4348] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" Namespace="calico-system" Pod="calico-kube-controllers-5f856dbd65-p9hgt" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--5f856dbd65--p9hgt-eth0" Mar 25 01:18:39.333322 containerd[1757]: 2025-03-25 01:18:39.303 [INFO][4348] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" Namespace="calico-system" Pod="calico-kube-controllers-5f856dbd65-p9hgt" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--5f856dbd65--p9hgt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--5f856dbd65--p9hgt-eth0", GenerateName:"calico-kube-controllers-5f856dbd65-", Namespace:"calico-system", SelfLink:"", UID:"6fe0d135-10c8-485b-b95f-2f53eb1fd5f3", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 1, 18, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f856dbd65", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-a-f1ebfb6c0b", ContainerID:"230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2", Pod:"calico-kube-controllers-5f856dbd65-p9hgt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidf092883c6f", MAC:"a2:6e:97:72:a4:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 01:18:39.333367 containerd[1757]: 2025-03-25 01:18:39.322 [INFO][4348] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" Namespace="calico-system" Pod="calico-kube-controllers-5f856dbd65-p9hgt" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--5f856dbd65--p9hgt-eth0" Mar 25 01:18:39.350719 kubelet[3333]: I0325 01:18:39.349722 3333 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 25 01:18:39.367851 containerd[1757]: time="2025-03-25T01:18:39.367807454Z" level=info msg="TaskExit event in podsandbox handler container_id:\"75d5eaca95570629b3e8a46f8a8e245db501832c00567a5b7702b2dc1dcaab6a\" id:\"1fcb342c72123cc3d92c0d856d21480affa16cc08e4fcfe659fd86fd1ea689a3\" pid:4378 exit_status:1 exited_at:{seconds:1742865519 nanos:367500575}" Mar 25 01:18:39.643206 containerd[1757]: time="2025-03-25T01:18:39.643110200Z" level=info msg="connecting to shim 
230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" address="unix:///run/containerd/s/0389e3eeb0b404b928ad4a865ac4c2ba2c05ba3fa79d97a5d560924b5394ce2d" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:18:39.704450 systemd[1]: Started cri-containerd-230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2.scope - libcontainer container 230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2. Mar 25 01:18:39.994633 kernel: bpftool[4565]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 25 01:18:40.045249 containerd[1757]: time="2025-03-25T01:18:40.045204381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f856dbd65-p9hgt,Uid:6fe0d135-10c8-485b-b95f-2f53eb1fd5f3,Namespace:calico-system,Attempt:0,} returns sandbox id \"230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2\"" Mar 25 01:18:40.048193 containerd[1757]: time="2025-03-25T01:18:40.048157700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\"" Mar 25 01:18:40.181262 systemd-networkd[1452]: vxlan.calico: Link UP Mar 25 01:18:40.181270 systemd-networkd[1452]: vxlan.calico: Gained carrier Mar 25 01:18:40.800806 systemd-networkd[1452]: calidf092883c6f: Gained IPv6LL Mar 25 01:18:41.092093 containerd[1757]: time="2025-03-25T01:18:41.092053734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d9d7df7-rbgrp,Uid:8544a506-2dac-42c8-8854-ee9ceda35a43,Namespace:calico-apiserver,Attempt:0,}" Mar 25 01:18:41.442397 systemd-networkd[1452]: vxlan.calico: Gained IPv6LL Mar 25 01:18:42.092635 containerd[1757]: time="2025-03-25T01:18:42.092421384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d6vfh,Uid:dda6a7d3-a9dd-4cc3-a459-07b4c5111bbc,Namespace:calico-system,Attempt:0,}" Mar 25 01:18:43.092329 containerd[1757]: time="2025-03-25T01:18:43.092283674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d9d7df7-t5jcx,Uid:3a085748-6d3d-40b0-9ada-ed65cd6602a7,Namespace:calico-apiserver,Attempt:0,}" Mar 25 01:18:43.092724 containerd[1757]: time="2025-03-25T01:18:43.092283994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c4w8j,Uid:f0e191f4-a783-46bf-8a30-3f909e5d01f7,Namespace:kube-system,Attempt:0,}" Mar 25 01:18:45.924240 waagent[1984]: 2025-03-25T01:18:45.923522Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Mar 25 01:18:45.931635 waagent[1984]: 2025-03-25T01:18:45.930661Z INFO ExtHandler Mar 25 01:18:45.931635 waagent[1984]: 2025-03-25T01:18:45.930753Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 5d31e80b-5ef2-4e4c-8bb5-d05fd26541fb eTag: 10948808332763155205 source: Fabric] Mar 25 01:18:45.931635 waagent[1984]: 2025-03-25T01:18:45.931030Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
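The `vxlan.calico: Gained carrier` and `calidf092883c6f: Gained IPv6LL` events above are systemd-networkd observing each new Calico interface come up and acquire an IPv6 link-local address. A minimal sketch of the classic EUI-64 derivation of that address from the interface MAC — an assumption for illustration, since networkd can also be configured for RFC 7217 stable-privacy addresses, in which case the suffix is an opaque hash instead:

```go
package main

import (
	"fmt"
	"net"
)

// ipv6LinkLocal derives the EUI-64-based fe80::/64 link-local address for a
// 48-bit MAC: insert ff:fe in the middle and flip the universal/local bit.
func ipv6LinkLocal(mac net.HardwareAddr) net.IP {
	ip := make(net.IP, net.IPv6len) // zeroed 16 bytes
	ip[0], ip[1] = 0xfe, 0x80       // fe80::/64 prefix
	ip[8] = mac[0] ^ 0x02           // flip the U/L bit
	ip[9], ip[10] = mac[1], mac[2]
	ip[11], ip[12] = 0xff, 0xfe // EUI-64 filler
	ip[13], ip[14], ip[15] = mac[3], mac[4], mac[5]
	return ip
}

func main() {
	// MAC assigned to calidf092883c6f in the endpoint dump above.
	mac, err := net.ParseMAC("a2:6e:97:72:a4:cf")
	if err != nil {
		panic(err)
	}
	fmt.Println(ipv6LinkLocal(mac)) // fe80::a06e:97ff:fe72:a4cf
}
```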
Mar 25 01:18:45.931635 waagent[1984]: 2025-03-25T01:18:45.931552Z INFO ExtHandler Mar 25 01:18:45.931848 waagent[1984]: 2025-03-25T01:18:45.931819Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Mar 25 01:18:45.936903 waagent[1984]: 2025-03-25T01:18:45.936878Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 25 01:18:46.147665 waagent[1984]: 2025-03-25T01:18:46.147549Z INFO ExtHandler Downloaded certificate {'thumbprint': 'CFABEE6DF053D4991E3065E7B05198D16662D210', 'hasPrivateKey': True} Mar 25 01:18:46.148271 waagent[1984]: 2025-03-25T01:18:46.148221Z INFO ExtHandler Downloaded certificate {'thumbprint': '390331DC9C27F8A789FCA5ED6F9EA7227A5E3AE3', 'hasPrivateKey': False} Mar 25 01:18:46.149125 waagent[1984]: 2025-03-25T01:18:46.148690Z INFO ExtHandler Fetch goal state completed Mar 25 01:18:46.149125 waagent[1984]: 2025-03-25T01:18:46.149076Z INFO ExtHandler ExtHandler Mar 25 01:18:46.149676 waagent[1984]: 2025-03-25T01:18:46.149634Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: e31dbdb3-e253-4b05-9afd-05968522a573 correlation a858468c-4f12-4c0f-ae07-11638ec55349 created: 2025-03-25T01:18:32.474419Z] Mar 25 01:18:46.150007 waagent[1984]: 2025-03-25T01:18:46.149972Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Mar 25 01:18:46.150636 waagent[1984]: 2025-03-25T01:18:46.150558Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 1 ms] Mar 25 01:18:49.427486 systemd-networkd[1452]: calibe1e40bdb21: Link UP Mar 25 01:18:49.429304 systemd-networkd[1452]: calibe1e40bdb21: Gained carrier Mar 25 01:18:49.454078 containerd[1757]: 2025-03-25 01:18:49.329 [INFO][4655] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--apiserver--85d9d7df7--rbgrp-eth0 calico-apiserver-85d9d7df7- calico-apiserver 8544a506-2dac-42c8-8854-ee9ceda35a43 694 0 2025-03-25 01:18:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85d9d7df7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4284.0.0-a-f1ebfb6c0b calico-apiserver-85d9d7df7-rbgrp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibe1e40bdb21 [] []}} ContainerID="797a4a64ba6baa94523dd40aa34621105316149b3e431baa35bb6a9ec31b1058" Namespace="calico-apiserver" Pod="calico-apiserver-85d9d7df7-rbgrp" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--apiserver--85d9d7df7--rbgrp-" Mar 25 01:18:49.454078 containerd[1757]: 2025-03-25 01:18:49.329 [INFO][4655] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="797a4a64ba6baa94523dd40aa34621105316149b3e431baa35bb6a9ec31b1058" Namespace="calico-apiserver" Pod="calico-apiserver-85d9d7df7-rbgrp" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--apiserver--85d9d7df7--rbgrp-eth0" Mar 25 01:18:49.454078 containerd[1757]: 2025-03-25 01:18:49.357 [INFO][4668] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="797a4a64ba6baa94523dd40aa34621105316149b3e431baa35bb6a9ec31b1058" HandleID="k8s-pod-network.797a4a64ba6baa94523dd40aa34621105316149b3e431baa35bb6a9ec31b1058" Workload="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--apiserver--85d9d7df7--rbgrp-eth0" Mar 25 01:18:49.454734 containerd[1757]: 
2025-03-25 01:18:49.371 [INFO][4668] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="797a4a64ba6baa94523dd40aa34621105316149b3e431baa35bb6a9ec31b1058" HandleID="k8s-pod-network.797a4a64ba6baa94523dd40aa34621105316149b3e431baa35bb6a9ec31b1058" Workload="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--apiserver--85d9d7df7--rbgrp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000498860), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4284.0.0-a-f1ebfb6c0b", "pod":"calico-apiserver-85d9d7df7-rbgrp", "timestamp":"2025-03-25 01:18:49.357021315 +0000 UTC"}, Hostname:"ci-4284.0.0-a-f1ebfb6c0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 25 01:18:49.454734 containerd[1757]: 2025-03-25 01:18:49.371 [INFO][4668] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 25 01:18:49.454734 containerd[1757]: 2025-03-25 01:18:49.371 [INFO][4668] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 25 01:18:49.454734 containerd[1757]: 2025-03-25 01:18:49.371 [INFO][4668] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-a-f1ebfb6c0b' Mar 25 01:18:49.454734 containerd[1757]: 2025-03-25 01:18:49.373 [INFO][4668] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.797a4a64ba6baa94523dd40aa34621105316149b3e431baa35bb6a9ec31b1058" host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:49.454734 containerd[1757]: 2025-03-25 01:18:49.378 [INFO][4668] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:49.454734 containerd[1757]: 2025-03-25 01:18:49.383 [INFO][4668] ipam/ipam.go 489: Trying affinity for 192.168.74.0/26 host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:49.454734 containerd[1757]: 2025-03-25 01:18:49.384 [INFO][4668] ipam/ipam.go 155: Attempting to load block cidr=192.168.74.0/26 host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:49.454734 containerd[1757]: 2025-03-25 01:18:49.388 [INFO][4668] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.0/26 host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:49.454933 containerd[1757]: 2025-03-25 01:18:49.388 [INFO][4668] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.0/26 handle="k8s-pod-network.797a4a64ba6baa94523dd40aa34621105316149b3e431baa35bb6a9ec31b1058" host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:49.454933 containerd[1757]: 2025-03-25 01:18:49.390 [INFO][4668] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.797a4a64ba6baa94523dd40aa34621105316149b3e431baa35bb6a9ec31b1058 Mar 25 01:18:49.454933 containerd[1757]: 2025-03-25 01:18:49.397 [INFO][4668] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.74.0/26 handle="k8s-pod-network.797a4a64ba6baa94523dd40aa34621105316149b3e431baa35bb6a9ec31b1058" host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:49.454933 containerd[1757]: 2025-03-25 01:18:49.411 [INFO][4668] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.74.2/26] block=192.168.74.0/26 handle="k8s-pod-network.797a4a64ba6baa94523dd40aa34621105316149b3e431baa35bb6a9ec31b1058" host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:49.454933 containerd[1757]: 2025-03-25 01:18:49.411 [INFO][4668] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.2/26] handle="k8s-pod-network.797a4a64ba6baa94523dd40aa34621105316149b3e431baa35bb6a9ec31b1058" 
host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:49.454933 containerd[1757]: 2025-03-25 01:18:49.411 [INFO][4668] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 25 01:18:49.454933 containerd[1757]: 2025-03-25 01:18:49.411 [INFO][4668] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.2/26] IPv6=[] ContainerID="797a4a64ba6baa94523dd40aa34621105316149b3e431baa35bb6a9ec31b1058" HandleID="k8s-pod-network.797a4a64ba6baa94523dd40aa34621105316149b3e431baa35bb6a9ec31b1058" Workload="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--apiserver--85d9d7df7--rbgrp-eth0" Mar 25 01:18:49.455068 containerd[1757]: 2025-03-25 01:18:49.418 [INFO][4655] cni-plugin/k8s.go 386: Populated endpoint ContainerID="797a4a64ba6baa94523dd40aa34621105316149b3e431baa35bb6a9ec31b1058" Namespace="calico-apiserver" Pod="calico-apiserver-85d9d7df7-rbgrp" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--apiserver--85d9d7df7--rbgrp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--apiserver--85d9d7df7--rbgrp-eth0", GenerateName:"calico-apiserver-85d9d7df7-", Namespace:"calico-apiserver", SelfLink:"", UID:"8544a506-2dac-42c8-8854-ee9ceda35a43", ResourceVersion:"694", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 1, 18, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85d9d7df7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-a-f1ebfb6c0b", ContainerID:"", Pod:"calico-apiserver-85d9d7df7-rbgrp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe1e40bdb21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 01:18:49.455117 containerd[1757]: 2025-03-25 01:18:49.418 [INFO][4655] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.74.2/32] ContainerID="797a4a64ba6baa94523dd40aa34621105316149b3e431baa35bb6a9ec31b1058" Namespace="calico-apiserver" Pod="calico-apiserver-85d9d7df7-rbgrp" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--apiserver--85d9d7df7--rbgrp-eth0" Mar 25 01:18:49.455117 containerd[1757]: 2025-03-25 01:18:49.419 [INFO][4655] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe1e40bdb21 ContainerID="797a4a64ba6baa94523dd40aa34621105316149b3e431baa35bb6a9ec31b1058" Namespace="calico-apiserver" Pod="calico-apiserver-85d9d7df7-rbgrp" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--apiserver--85d9d7df7--rbgrp-eth0" Mar 25 01:18:49.455117 containerd[1757]: 2025-03-25 01:18:49.430 [INFO][4655] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="797a4a64ba6baa94523dd40aa34621105316149b3e431baa35bb6a9ec31b1058" Namespace="calico-apiserver" Pod="calico-apiserver-85d9d7df7-rbgrp" 
WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--apiserver--85d9d7df7--rbgrp-eth0" Mar 25 01:18:49.455178 containerd[1757]: 2025-03-25 01:18:49.432 [INFO][4655] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="797a4a64ba6baa94523dd40aa34621105316149b3e431baa35bb6a9ec31b1058" Namespace="calico-apiserver" Pod="calico-apiserver-85d9d7df7-rbgrp" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--apiserver--85d9d7df7--rbgrp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--apiserver--85d9d7df7--rbgrp-eth0", GenerateName:"calico-apiserver-85d9d7df7-", Namespace:"calico-apiserver", SelfLink:"", UID:"8544a506-2dac-42c8-8854-ee9ceda35a43", ResourceVersion:"694", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 1, 18, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85d9d7df7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-a-f1ebfb6c0b", ContainerID:"797a4a64ba6baa94523dd40aa34621105316149b3e431baa35bb6a9ec31b1058", Pod:"calico-apiserver-85d9d7df7-rbgrp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe1e40bdb21", MAC:"76:41:14:ee:6f:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 01:18:49.455223 containerd[1757]: 2025-03-25 01:18:49.447 [INFO][4655] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="797a4a64ba6baa94523dd40aa34621105316149b3e431baa35bb6a9ec31b1058" Namespace="calico-apiserver" Pod="calico-apiserver-85d9d7df7-rbgrp" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--apiserver--85d9d7df7--rbgrp-eth0" Mar 25 01:18:49.608840 systemd-networkd[1452]: cali055a58b2628: Link UP Mar 25 01:18:49.610221 systemd-networkd[1452]: cali055a58b2628: Gained carrier Mar 25 01:18:50.453789 containerd[1757]: 2025-03-25 01:18:49.395 [INFO][4673] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--a--f1ebfb6c0b-k8s-csi--node--driver--d6vfh-eth0 csi-node-driver- calico-system dda6a7d3-a9dd-4cc3-a459-07b4c5111bbc 576 0 2025-03-25 01:18:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:54877d75d5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4284.0.0-a-f1ebfb6c0b csi-node-driver-d6vfh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali055a58b2628 [] []}} ContainerID="41d6ed43f3b09ed055d2a47ac0e95b196e8e84cefa7994e194e82e7b13407729" Namespace="calico-system" Pod="csi-node-driver-d6vfh" 
WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-csi--node--driver--d6vfh-" Mar 25 01:18:50.453789 containerd[1757]: 2025-03-25 01:18:49.395 [INFO][4673] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="41d6ed43f3b09ed055d2a47ac0e95b196e8e84cefa7994e194e82e7b13407729" Namespace="calico-system" Pod="csi-node-driver-d6vfh" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-csi--node--driver--d6vfh-eth0" Mar 25 01:18:50.453789 containerd[1757]: 2025-03-25 01:18:49.451 [INFO][4697] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="41d6ed43f3b09ed055d2a47ac0e95b196e8e84cefa7994e194e82e7b13407729" HandleID="k8s-pod-network.41d6ed43f3b09ed055d2a47ac0e95b196e8e84cefa7994e194e82e7b13407729" Workload="ci--4284.0.0--a--f1ebfb6c0b-k8s-csi--node--driver--d6vfh-eth0" Mar 25 01:18:49.710028 systemd-networkd[1452]: caliaa4249913b9: Link UP Mar 25 01:18:50.453965 containerd[1757]: 2025-03-25 01:18:49.472 [INFO][4697] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="41d6ed43f3b09ed055d2a47ac0e95b196e8e84cefa7994e194e82e7b13407729" HandleID="k8s-pod-network.41d6ed43f3b09ed055d2a47ac0e95b196e8e84cefa7994e194e82e7b13407729" Workload="ci--4284.0.0--a--f1ebfb6c0b-k8s-csi--node--driver--d6vfh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000319500), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4284.0.0-a-f1ebfb6c0b", "pod":"csi-node-driver-d6vfh", "timestamp":"2025-03-25 01:18:49.451562601 +0000 UTC"}, Hostname:"ci-4284.0.0-a-f1ebfb6c0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 25 01:18:50.453965 containerd[1757]: 2025-03-25 01:18:49.472 [INFO][4697] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 25 01:18:50.453965 containerd[1757]: 2025-03-25 01:18:49.473 [INFO][4697] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 25 01:18:50.453965 containerd[1757]: 2025-03-25 01:18:49.473 [INFO][4697] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-a-f1ebfb6c0b' Mar 25 01:18:50.453965 containerd[1757]: 2025-03-25 01:18:49.475 [INFO][4697] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.41d6ed43f3b09ed055d2a47ac0e95b196e8e84cefa7994e194e82e7b13407729" host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:50.453965 containerd[1757]: 2025-03-25 01:18:49.571 [INFO][4697] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:50.453965 containerd[1757]: 2025-03-25 01:18:49.577 [INFO][4697] ipam/ipam.go 489: Trying affinity for 192.168.74.0/26 host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:50.453965 containerd[1757]: 2025-03-25 01:18:49.579 [INFO][4697] ipam/ipam.go 155: Attempting to load block cidr=192.168.74.0/26 host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:50.453965 containerd[1757]: 2025-03-25 01:18:49.582 [INFO][4697] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.0/26 host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:49.710443 systemd-networkd[1452]: caliaa4249913b9: Gained carrier Mar 25 01:18:50.454192 containerd[1757]: 2025-03-25 01:18:49.582 [INFO][4697] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.0/26 handle="k8s-pod-network.41d6ed43f3b09ed055d2a47ac0e95b196e8e84cefa7994e194e82e7b13407729" host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:50.454192 containerd[1757]: 2025-03-25 01:18:49.583 [INFO][4697] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.41d6ed43f3b09ed055d2a47ac0e95b196e8e84cefa7994e194e82e7b13407729 Mar 25 01:18:50.454192 containerd[1757]: 2025-03-25 01:18:49.593 [INFO][4697] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.74.0/26 handle="k8s-pod-network.41d6ed43f3b09ed055d2a47ac0e95b196e8e84cefa7994e194e82e7b13407729" host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:50.454192 containerd[1757]: 2025-03-25 01:18:49.599 [INFO][4697] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.74.3/26] block=192.168.74.0/26 handle="k8s-pod-network.41d6ed43f3b09ed055d2a47ac0e95b196e8e84cefa7994e194e82e7b13407729" host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:50.454192 containerd[1757]: 2025-03-25 01:18:49.599 [INFO][4697] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.3/26] handle="k8s-pod-network.41d6ed43f3b09ed055d2a47ac0e95b196e8e84cefa7994e194e82e7b13407729" host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:50.454192 containerd[1757]: 2025-03-25 01:18:49.599 [INFO][4697] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
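Every allocation in this section follows the same IPAM shape: confirm the node's affinity to block 192.168.74.0/26, load the block, take the first free address, and write the block back to claim it ("Writing block in order to claim IPs"). A toy, stdlib-only model of that first-free behaviour — the real allocator in libcalico-go's ipam.go additionally tracks handles, affinities, and datastore revisions:

```go
package main

import (
	"fmt"
	"net/netip"
)

// block models a Calico IPAM affinity block: a /26 whose addresses are
// handed out first-free, which is why the pods above receive 192.168.74.1,
// .2, .3, ... in strict order.
type block struct {
	cidr      netip.Prefix
	allocated map[netip.Addr]string // address -> IPAM handle
}

func newBlock(cidr string) *block {
	return &block{cidr: netip.MustParsePrefix(cidr), allocated: map[netip.Addr]string{}}
}

// autoAssign claims the first free address, mirroring the log's
// "Attempting to assign 1 addresses from block" / "Successfully claimed IPs".
func (b *block) autoAssign(handle string) (netip.Addr, bool) {
	// Start at .Next() so the .0 network address is never handed out.
	for a := b.cidr.Addr().Next(); b.cidr.Contains(a); a = a.Next() {
		if _, taken := b.allocated[a]; !taken {
			b.allocated[a] = handle
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	b := newBlock("192.168.74.0/26")
	for _, pod := range []string{
		"calico-kube-controllers-5f856dbd65-p9hgt",
		"calico-apiserver-85d9d7df7-rbgrp",
		"csi-node-driver-d6vfh",
	} {
		ip, _ := b.autoAssign("k8s-pod-network." + pod)
		fmt.Printf("%-42s -> %s\n", pod, ip)
	}
}
```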
Mar 25 01:18:50.454192 containerd[1757]: 2025-03-25 01:18:49.599 [INFO][4697] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.3/26] IPv6=[] ContainerID="41d6ed43f3b09ed055d2a47ac0e95b196e8e84cefa7994e194e82e7b13407729" HandleID="k8s-pod-network.41d6ed43f3b09ed055d2a47ac0e95b196e8e84cefa7994e194e82e7b13407729" Workload="ci--4284.0.0--a--f1ebfb6c0b-k8s-csi--node--driver--d6vfh-eth0" Mar 25 01:18:49.805758 systemd-networkd[1452]: caliee991a2c20a: Link UP Mar 25 01:18:50.454360 containerd[1757]: 2025-03-25 01:18:49.602 [INFO][4673] cni-plugin/k8s.go 386: Populated endpoint ContainerID="41d6ed43f3b09ed055d2a47ac0e95b196e8e84cefa7994e194e82e7b13407729" Namespace="calico-system" Pod="csi-node-driver-d6vfh" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-csi--node--driver--d6vfh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--a--f1ebfb6c0b-k8s-csi--node--driver--d6vfh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dda6a7d3-a9dd-4cc3-a459-07b4c5111bbc", ResourceVersion:"576", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 1, 18, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"54877d75d5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-a-f1ebfb6c0b", ContainerID:"", Pod:"csi-node-driver-d6vfh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.74.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali055a58b2628", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 01:18:49.807048 systemd-networkd[1452]: caliee991a2c20a: Gained carrier Mar 25 01:18:50.454612 containerd[1757]: 2025-03-25 01:18:49.603 [INFO][4673] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.74.3/32] ContainerID="41d6ed43f3b09ed055d2a47ac0e95b196e8e84cefa7994e194e82e7b13407729" Namespace="calico-system" Pod="csi-node-driver-d6vfh" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-csi--node--driver--d6vfh-eth0" Mar 25 01:18:50.454612 containerd[1757]: 2025-03-25 01:18:49.603 [INFO][4673] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali055a58b2628 ContainerID="41d6ed43f3b09ed055d2a47ac0e95b196e8e84cefa7994e194e82e7b13407729" Namespace="calico-system" Pod="csi-node-driver-d6vfh" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-csi--node--driver--d6vfh-eth0" Mar 25 01:18:50.454612 containerd[1757]: 2025-03-25 01:18:49.609 [INFO][4673] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="41d6ed43f3b09ed055d2a47ac0e95b196e8e84cefa7994e194e82e7b13407729" Namespace="calico-system" Pod="csi-node-driver-d6vfh" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-csi--node--driver--d6vfh-eth0" Mar 25 01:18:50.454682 containerd[1757]: 2025-03-25 01:18:49.612 [INFO][4673] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="41d6ed43f3b09ed055d2a47ac0e95b196e8e84cefa7994e194e82e7b13407729" Namespace="calico-system" Pod="csi-node-driver-d6vfh" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-csi--node--driver--d6vfh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--a--f1ebfb6c0b-k8s-csi--node--driver--d6vfh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dda6a7d3-a9dd-4cc3-a459-07b4c5111bbc", ResourceVersion:"576", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 1, 18, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"54877d75d5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-a-f1ebfb6c0b", ContainerID:"41d6ed43f3b09ed055d2a47ac0e95b196e8e84cefa7994e194e82e7b13407729", Pod:"csi-node-driver-d6vfh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.74.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali055a58b2628", MAC:"e6:6a:b7:ab:13:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 01:18:50.454737 containerd[1757]: 2025-03-25 01:18:49.627 [INFO][4673] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="41d6ed43f3b09ed055d2a47ac0e95b196e8e84cefa7994e194e82e7b13407729" Namespace="calico-system" Pod="csi-node-driver-d6vfh" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-csi--node--driver--d6vfh-eth0" Mar 25 01:18:50.454737 containerd[1757]: 2025-03-25 01:18:49.496 [INFO][4688] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--a--f1ebfb6c0b-k8s-coredns--668d6bf9bc--c4w8j-eth0 coredns-668d6bf9bc- kube-system f0e191f4-a783-46bf-8a30-3f909e5d01f7 689 0 2025-03-25 01:18:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4284.0.0-a-f1ebfb6c0b coredns-668d6bf9bc-c4w8j eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliaa4249913b9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="93e312b71f5b94d92184fca5f3461f90cf69fc21c164e1b8becf7505a96ff55b" Namespace="kube-system" Pod="coredns-668d6bf9bc-c4w8j" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-coredns--668d6bf9bc--c4w8j-" Mar 25 01:18:50.454737 containerd[1757]: 2025-03-25 01:18:49.496 [INFO][4688] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="93e312b71f5b94d92184fca5f3461f90cf69fc21c164e1b8becf7505a96ff55b" Namespace="kube-system" Pod="coredns-668d6bf9bc-c4w8j" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-coredns--668d6bf9bc--c4w8j-eth0" Mar 25 01:18:50.454737 containerd[1757]: 
2025-03-25 01:18:49.532 [INFO][4730] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="93e312b71f5b94d92184fca5f3461f90cf69fc21c164e1b8becf7505a96ff55b" HandleID="k8s-pod-network.93e312b71f5b94d92184fca5f3461f90cf69fc21c164e1b8becf7505a96ff55b" Workload="ci--4284.0.0--a--f1ebfb6c0b-k8s-coredns--668d6bf9bc--c4w8j-eth0" Mar 25 01:18:50.454834 containerd[1757]: 2025-03-25 01:18:49.573 [INFO][4730] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="93e312b71f5b94d92184fca5f3461f90cf69fc21c164e1b8becf7505a96ff55b" HandleID="k8s-pod-network.93e312b71f5b94d92184fca5f3461f90cf69fc21c164e1b8becf7505a96ff55b" Workload="ci--4284.0.0--a--f1ebfb6c0b-k8s-coredns--668d6bf9bc--c4w8j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dee0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4284.0.0-a-f1ebfb6c0b", "pod":"coredns-668d6bf9bc-c4w8j", "timestamp":"2025-03-25 01:18:49.532632973 +0000 UTC"}, Hostname:"ci-4284.0.0-a-f1ebfb6c0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 25 01:18:50.454834 containerd[1757]: 2025-03-25 01:18:49.573 [INFO][4730] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 25 01:18:50.454834 containerd[1757]: 2025-03-25 01:18:49.599 [INFO][4730] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 25 01:18:50.454834 containerd[1757]: 2025-03-25 01:18:49.600 [INFO][4730] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-a-f1ebfb6c0b' Mar 25 01:18:50.454834 containerd[1757]: 2025-03-25 01:18:49.609 [INFO][4730] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.93e312b71f5b94d92184fca5f3461f90cf69fc21c164e1b8becf7505a96ff55b" host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:50.454834 containerd[1757]: 2025-03-25 01:18:49.670 [INFO][4730] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:50.454834 containerd[1757]: 2025-03-25 01:18:49.676 [INFO][4730] ipam/ipam.go 489: Trying affinity for 192.168.74.0/26 host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:50.454834 containerd[1757]: 2025-03-25 01:18:49.678 [INFO][4730] ipam/ipam.go 155: Attempting to load block cidr=192.168.74.0/26 host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:50.454834 containerd[1757]: 2025-03-25 01:18:49.680 [INFO][4730] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.0/26 host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:50.455012 containerd[1757]: 2025-03-25 01:18:49.680 [INFO][4730] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.0/26 handle="k8s-pod-network.93e312b71f5b94d92184fca5f3461f90cf69fc21c164e1b8becf7505a96ff55b" host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:50.455012 containerd[1757]: 2025-03-25 01:18:49.681 [INFO][4730] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.93e312b71f5b94d92184fca5f3461f90cf69fc21c164e1b8becf7505a96ff55b Mar 25 01:18:50.455012 containerd[1757]: 2025-03-25 01:18:49.686 [INFO][4730] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.74.0/26 handle="k8s-pod-network.93e312b71f5b94d92184fca5f3461f90cf69fc21c164e1b8becf7505a96ff55b" host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:50.455012 containerd[1757]: 2025-03-25 01:18:49.701 [INFO][4730] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.74.4/26] block=192.168.74.0/26 
handle="k8s-pod-network.93e312b71f5b94d92184fca5f3461f90cf69fc21c164e1b8becf7505a96ff55b" host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:50.455012 containerd[1757]: 2025-03-25 01:18:49.701 [INFO][4730] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.4/26] handle="k8s-pod-network.93e312b71f5b94d92184fca5f3461f90cf69fc21c164e1b8becf7505a96ff55b" host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:50.455012 containerd[1757]: 2025-03-25 01:18:49.701 [INFO][4730] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 25 01:18:50.455012 containerd[1757]: 2025-03-25 01:18:49.701 [INFO][4730] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.4/26] IPv6=[] ContainerID="93e312b71f5b94d92184fca5f3461f90cf69fc21c164e1b8becf7505a96ff55b" HandleID="k8s-pod-network.93e312b71f5b94d92184fca5f3461f90cf69fc21c164e1b8becf7505a96ff55b" Workload="ci--4284.0.0--a--f1ebfb6c0b-k8s-coredns--668d6bf9bc--c4w8j-eth0" Mar 25 01:18:50.455141 containerd[1757]: 2025-03-25 01:18:49.704 [INFO][4688] cni-plugin/k8s.go 386: Populated endpoint ContainerID="93e312b71f5b94d92184fca5f3461f90cf69fc21c164e1b8becf7505a96ff55b" Namespace="kube-system" Pod="coredns-668d6bf9bc-c4w8j" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-coredns--668d6bf9bc--c4w8j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--a--f1ebfb6c0b-k8s-coredns--668d6bf9bc--c4w8j-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f0e191f4-a783-46bf-8a30-3f909e5d01f7", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 1, 18, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-a-f1ebfb6c0b", ContainerID:"", Pod:"coredns-668d6bf9bc-c4w8j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa4249913b9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 01:18:50.455141 containerd[1757]: 2025-03-25 01:18:49.705 [INFO][4688] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.74.4/32] ContainerID="93e312b71f5b94d92184fca5f3461f90cf69fc21c164e1b8becf7505a96ff55b" Namespace="kube-system" Pod="coredns-668d6bf9bc-c4w8j" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-coredns--668d6bf9bc--c4w8j-eth0" Mar 25 01:18:50.455141 containerd[1757]: 2025-03-25 01:18:49.705 [INFO][4688] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa4249913b9 
ContainerID="93e312b71f5b94d92184fca5f3461f90cf69fc21c164e1b8becf7505a96ff55b" Namespace="kube-system" Pod="coredns-668d6bf9bc-c4w8j" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-coredns--668d6bf9bc--c4w8j-eth0" Mar 25 01:18:50.455141 containerd[1757]: 2025-03-25 01:18:49.712 [INFO][4688] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="93e312b71f5b94d92184fca5f3461f90cf69fc21c164e1b8becf7505a96ff55b" Namespace="kube-system" Pod="coredns-668d6bf9bc-c4w8j" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-coredns--668d6bf9bc--c4w8j-eth0" Mar 25 01:18:50.455141 containerd[1757]: 2025-03-25 01:18:49.714 [INFO][4688] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="93e312b71f5b94d92184fca5f3461f90cf69fc21c164e1b8becf7505a96ff55b" Namespace="kube-system" Pod="coredns-668d6bf9bc-c4w8j" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-coredns--668d6bf9bc--c4w8j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--a--f1ebfb6c0b-k8s-coredns--668d6bf9bc--c4w8j-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f0e191f4-a783-46bf-8a30-3f909e5d01f7", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 1, 18, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-a-f1ebfb6c0b", ContainerID:"93e312b71f5b94d92184fca5f3461f90cf69fc21c164e1b8becf7505a96ff55b", Pod:"coredns-668d6bf9bc-c4w8j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa4249913b9", MAC:"f6:55:e7:96:e1:be", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 01:18:50.455141 containerd[1757]: 2025-03-25 01:18:49.735 [INFO][4688] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="93e312b71f5b94d92184fca5f3461f90cf69fc21c164e1b8becf7505a96ff55b" Namespace="kube-system" Pod="coredns-668d6bf9bc-c4w8j" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-coredns--668d6bf9bc--c4w8j-eth0" Mar 25 01:18:50.455330 containerd[1757]: 2025-03-25 01:18:49.543 [INFO][4720] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--apiserver--85d9d7df7--t5jcx-eth0 calico-apiserver-85d9d7df7- calico-apiserver 3a085748-6d3d-40b0-9ada-ed65cd6602a7 696 0 2025-03-25 01:18:18 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85d9d7df7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4284.0.0-a-f1ebfb6c0b calico-apiserver-85d9d7df7-t5jcx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliee991a2c20a [] []}} ContainerID="fd0d0b557d7227f5dd9bfe9d00602cba696a07b2155bb6af1ad65f75c822ab40" Namespace="calico-apiserver" Pod="calico-apiserver-85d9d7df7-t5jcx" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--apiserver--85d9d7df7--t5jcx-" Mar 25 01:18:50.455330 containerd[1757]: 2025-03-25 01:18:49.545 [INFO][4720] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fd0d0b557d7227f5dd9bfe9d00602cba696a07b2155bb6af1ad65f75c822ab40" Namespace="calico-apiserver" Pod="calico-apiserver-85d9d7df7-t5jcx" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--apiserver--85d9d7df7--t5jcx-eth0" Mar 25 01:18:50.455330 containerd[1757]: 2025-03-25 01:18:49.572 [INFO][4740] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fd0d0b557d7227f5dd9bfe9d00602cba696a07b2155bb6af1ad65f75c822ab40" HandleID="k8s-pod-network.fd0d0b557d7227f5dd9bfe9d00602cba696a07b2155bb6af1ad65f75c822ab40" Workload="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--apiserver--85d9d7df7--t5jcx-eth0" Mar 25 01:18:50.455330 containerd[1757]: 2025-03-25 01:18:49.670 [INFO][4740] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fd0d0b557d7227f5dd9bfe9d00602cba696a07b2155bb6af1ad65f75c822ab40" HandleID="k8s-pod-network.fd0d0b557d7227f5dd9bfe9d00602cba696a07b2155bb6af1ad65f75c822ab40" Workload="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--apiserver--85d9d7df7--t5jcx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000361ab0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4284.0.0-a-f1ebfb6c0b", "pod":"calico-apiserver-85d9d7df7-t5jcx", "timestamp":"2025-03-25 01:18:49.572286199 +0000 UTC"}, Hostname:"ci-4284.0.0-a-f1ebfb6c0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 25 01:18:50.455330 containerd[1757]: 2025-03-25 01:18:49.670 [INFO][4740] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 25 01:18:50.455330 containerd[1757]: 2025-03-25 01:18:49.701 [INFO][4740] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
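The long `endpoint=&v3.WorkloadEndpoint{...}` dumps in this section are Go `%#v`-style struct renderings, where unsigned integers print in hex — which is why the coredns ports above read `Port:0x35` (decimal 53) and `Port:0x23c1` (decimal 9153). A quick demonstration with a stand-in struct (the type below is illustrative, not the Calico API):

```go
package main

import "fmt"

type endpointPort struct {
	Name string
	Port uint16
}

func main() {
	ports := []endpointPort{{"dns", 53}, {"dns-tcp", 53}, {"metrics", 9153}}
	// %#v renders uint16 in hex, reproducing the log's Port:0x35 / Port:0x23c1.
	fmt.Printf("%#v\n", ports)
	for _, p := range ports {
		fmt.Printf("%s: %#x = %d\n", p.Name, p.Port, p.Port)
	}
}
```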
Mar 25 01:18:50.455330 containerd[1757]: 2025-03-25 01:18:49.701 [INFO][4740] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-a-f1ebfb6c0b' Mar 25 01:18:50.455330 containerd[1757]: 2025-03-25 01:18:49.711 [INFO][4740] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fd0d0b557d7227f5dd9bfe9d00602cba696a07b2155bb6af1ad65f75c822ab40" host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:50.455330 containerd[1757]: 2025-03-25 01:18:49.770 [INFO][4740] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:50.455330 containerd[1757]: 2025-03-25 01:18:49.777 [INFO][4740] ipam/ipam.go 489: Trying affinity for 192.168.74.0/26 host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:50.455330 containerd[1757]: 2025-03-25 01:18:49.780 [INFO][4740] ipam/ipam.go 155: Attempting to load block cidr=192.168.74.0/26 host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:50.455330 containerd[1757]: 2025-03-25 01:18:49.782 [INFO][4740] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.0/26 host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:50.455330 containerd[1757]: 2025-03-25 01:18:49.782 [INFO][4740] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.0/26 handle="k8s-pod-network.fd0d0b557d7227f5dd9bfe9d00602cba696a07b2155bb6af1ad65f75c822ab40" host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:50.455330 containerd[1757]: 2025-03-25 01:18:49.784 [INFO][4740] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fd0d0b557d7227f5dd9bfe9d00602cba696a07b2155bb6af1ad65f75c822ab40 Mar 25 01:18:50.455330 containerd[1757]: 2025-03-25 01:18:49.789 [INFO][4740] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.74.0/26 handle="k8s-pod-network.fd0d0b557d7227f5dd9bfe9d00602cba696a07b2155bb6af1ad65f75c822ab40" host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:50.455330 containerd[1757]: 2025-03-25 01:18:49.799 [INFO][4740] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.74.5/26] block=192.168.74.0/26 handle="k8s-pod-network.fd0d0b557d7227f5dd9bfe9d00602cba696a07b2155bb6af1ad65f75c822ab40" host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:50.455330 containerd[1757]: 2025-03-25 01:18:49.800 [INFO][4740] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.5/26] handle="k8s-pod-network.fd0d0b557d7227f5dd9bfe9d00602cba696a07b2155bb6af1ad65f75c822ab40" host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:50.455330 containerd[1757]: 2025-03-25 01:18:49.800 [INFO][4740] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
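Note how the four concurrent CNI ADDs ([4668], [4697], [4730], [4740]) queue on the "host-wide IPAM lock": each acquires, claims the next free address (.2 through .5), and releases before the next proceeds. A minimal model of that serialization, assuming nothing beyond a single mutex — lock acquisition order, not launch order, decides who gets which address, exactly as the timestamps above show:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		mu   sync.Mutex
		next = 2 // 192.168.74.1 already went to calico-kube-controllers
		wg   sync.WaitGroup
	)
	pods := []string{
		"calico-apiserver-85d9d7df7-rbgrp",
		"csi-node-driver-d6vfh",
		"coredns-668d6bf9bc-c4w8j",
		"calico-apiserver-85d9d7df7-t5jcx",
	}
	for _, pod := range pods {
		wg.Add(1)
		go func(pod string) {
			defer wg.Done()
			mu.Lock() // "Acquired host-wide IPAM lock."
			ip := fmt.Sprintf("192.168.74.%d/26", next)
			next++
			mu.Unlock() // "Released host-wide IPAM lock."
			fmt.Println(pod, "->", ip)
		}(pod)
	}
	wg.Wait()
}
```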
Mar 25 01:18:50.455330 containerd[1757]: 2025-03-25 01:18:49.800 [INFO][4740] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.5/26] IPv6=[] ContainerID="fd0d0b557d7227f5dd9bfe9d00602cba696a07b2155bb6af1ad65f75c822ab40" HandleID="k8s-pod-network.fd0d0b557d7227f5dd9bfe9d00602cba696a07b2155bb6af1ad65f75c822ab40" Workload="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--apiserver--85d9d7df7--t5jcx-eth0" Mar 25 01:18:50.455731 containerd[1757]: 2025-03-25 01:18:49.802 [INFO][4720] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fd0d0b557d7227f5dd9bfe9d00602cba696a07b2155bb6af1ad65f75c822ab40" Namespace="calico-apiserver" Pod="calico-apiserver-85d9d7df7-t5jcx" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--apiserver--85d9d7df7--t5jcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--apiserver--85d9d7df7--t5jcx-eth0", GenerateName:"calico-apiserver-85d9d7df7-", Namespace:"calico-apiserver", SelfLink:"", UID:"3a085748-6d3d-40b0-9ada-ed65cd6602a7", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 1, 18, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85d9d7df7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-a-f1ebfb6c0b", ContainerID:"", Pod:"calico-apiserver-85d9d7df7-t5jcx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliee991a2c20a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 01:18:50.455731 containerd[1757]: 2025-03-25 01:18:49.802 [INFO][4720] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.74.5/32] ContainerID="fd0d0b557d7227f5dd9bfe9d00602cba696a07b2155bb6af1ad65f75c822ab40" Namespace="calico-apiserver" Pod="calico-apiserver-85d9d7df7-t5jcx" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--apiserver--85d9d7df7--t5jcx-eth0" Mar 25 01:18:50.455731 containerd[1757]: 2025-03-25 01:18:49.802 [INFO][4720] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliee991a2c20a ContainerID="fd0d0b557d7227f5dd9bfe9d00602cba696a07b2155bb6af1ad65f75c822ab40" Namespace="calico-apiserver" Pod="calico-apiserver-85d9d7df7-t5jcx" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--apiserver--85d9d7df7--t5jcx-eth0" Mar 25 01:18:50.455731 containerd[1757]: 2025-03-25 01:18:49.806 [INFO][4720] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fd0d0b557d7227f5dd9bfe9d00602cba696a07b2155bb6af1ad65f75c822ab40" Namespace="calico-apiserver" Pod="calico-apiserver-85d9d7df7-t5jcx" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--apiserver--85d9d7df7--t5jcx-eth0" Mar 25 01:18:50.455731 containerd[1757]: 2025-03-25 01:18:49.807 [INFO][4720] cni-plugin/k8s.go 414: Added Mac, interface name, 
and active container ID to endpoint ContainerID="fd0d0b557d7227f5dd9bfe9d00602cba696a07b2155bb6af1ad65f75c822ab40" Namespace="calico-apiserver" Pod="calico-apiserver-85d9d7df7-t5jcx" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--apiserver--85d9d7df7--t5jcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--apiserver--85d9d7df7--t5jcx-eth0", GenerateName:"calico-apiserver-85d9d7df7-", Namespace:"calico-apiserver", SelfLink:"", UID:"3a085748-6d3d-40b0-9ada-ed65cd6602a7", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 1, 18, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85d9d7df7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-a-f1ebfb6c0b", ContainerID:"fd0d0b557d7227f5dd9bfe9d00602cba696a07b2155bb6af1ad65f75c822ab40", Pod:"calico-apiserver-85d9d7df7-t5jcx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliee991a2c20a", MAC:"86:ba:22:63:a8:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 01:18:50.455731 containerd[1757]: 2025-03-25 01:18:49.828 [INFO][4720] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fd0d0b557d7227f5dd9bfe9d00602cba696a07b2155bb6af1ad65f75c822ab40" Namespace="calico-apiserver" Pod="calico-apiserver-85d9d7df7-t5jcx" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--apiserver--85d9d7df7--t5jcx-eth0" Mar 25 01:18:50.976857 systemd-networkd[1452]: caliaa4249913b9: Gained IPv6LL Mar 25 01:18:51.104855 systemd-networkd[1452]: cali055a58b2628: Gained IPv6LL Mar 25 01:18:51.169130 systemd-networkd[1452]: calibe1e40bdb21: Gained IPv6LL Mar 25 01:18:51.424816 systemd-networkd[1452]: caliee991a2c20a: Gained IPv6LL Mar 25 01:18:51.737552 containerd[1757]: time="2025-03-25T01:18:51.737221513Z" level=info msg="connecting to shim 797a4a64ba6baa94523dd40aa34621105316149b3e431baa35bb6a9ec31b1058" address="unix:///run/containerd/s/a5f5f49553df9eb6762a01ac6ba4dd967bf8cb87c7a3f0842696a1048835ee27" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:18:51.754717 containerd[1757]: time="2025-03-25T01:18:51.754677107Z" level=info msg="connecting to shim 41d6ed43f3b09ed055d2a47ac0e95b196e8e84cefa7994e194e82e7b13407729" address="unix:///run/containerd/s/dcff3eb1fd5a8ae2bd010e46dff349990151cdcb005cdb34febbf776da4d49a8" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:18:51.763993 containerd[1757]: time="2025-03-25T01:18:51.763949424Z" level=info msg="connecting to shim 93e312b71f5b94d92184fca5f3461f90cf69fc21c164e1b8becf7505a96ff55b" address="unix:///run/containerd/s/2f2ae330009be67f60abb2e550b182150ab495a8c5aae44eb2fa7fa5bfeaa94f" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:18:51.781877 containerd[1757]: 
time="2025-03-25T01:18:51.781838497Z" level=info msg="connecting to shim fd0d0b557d7227f5dd9bfe9d00602cba696a07b2155bb6af1ad65f75c822ab40" address="unix:///run/containerd/s/1f11b9c117eb4c90d30768d6308d4a1532983f1baf11c25db117804f409bcb4a" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:18:51.808815 systemd[1]: Started cri-containerd-797a4a64ba6baa94523dd40aa34621105316149b3e431baa35bb6a9ec31b1058.scope - libcontainer container 797a4a64ba6baa94523dd40aa34621105316149b3e431baa35bb6a9ec31b1058. Mar 25 01:18:51.816832 systemd[1]: Started cri-containerd-93e312b71f5b94d92184fca5f3461f90cf69fc21c164e1b8becf7505a96ff55b.scope - libcontainer container 93e312b71f5b94d92184fca5f3461f90cf69fc21c164e1b8becf7505a96ff55b. Mar 25 01:18:51.835812 systemd[1]: Started cri-containerd-41d6ed43f3b09ed055d2a47ac0e95b196e8e84cefa7994e194e82e7b13407729.scope - libcontainer container 41d6ed43f3b09ed055d2a47ac0e95b196e8e84cefa7994e194e82e7b13407729. Mar 25 01:18:51.853898 systemd[1]: Started cri-containerd-fd0d0b557d7227f5dd9bfe9d00602cba696a07b2155bb6af1ad65f75c822ab40.scope - libcontainer container fd0d0b557d7227f5dd9bfe9d00602cba696a07b2155bb6af1ad65f75c822ab40. Mar 25 01:18:51.902699 containerd[1757]: time="2025-03-25T01:18:51.902580615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d6vfh,Uid:dda6a7d3-a9dd-4cc3-a459-07b4c5111bbc,Namespace:calico-system,Attempt:0,} returns sandbox id \"41d6ed43f3b09ed055d2a47ac0e95b196e8e84cefa7994e194e82e7b13407729\"" Mar 25 01:18:51.914579 containerd[1757]: time="2025-03-25T01:18:51.914168571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c4w8j,Uid:f0e191f4-a783-46bf-8a30-3f909e5d01f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"93e312b71f5b94d92184fca5f3461f90cf69fc21c164e1b8becf7505a96ff55b\"" Mar 25 01:18:51.920594 containerd[1757]: time="2025-03-25T01:18:51.920069689Z" level=info msg="CreateContainer within sandbox \"93e312b71f5b94d92184fca5f3461f90cf69fc21c164e1b8becf7505a96ff55b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 25 01:18:51.972337 containerd[1757]: time="2025-03-25T01:18:51.972244430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d9d7df7-t5jcx,Uid:3a085748-6d3d-40b0-9ada-ed65cd6602a7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"fd0d0b557d7227f5dd9bfe9d00602cba696a07b2155bb6af1ad65f75c822ab40\"" Mar 25 01:18:51.973386 containerd[1757]: time="2025-03-25T01:18:51.973112230Z" level=info msg="Container 566227539de29351299aa3959246b8afe95c096eefeeb6d7c8ca3c9e3224a078: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:18:51.998356 containerd[1757]: time="2025-03-25T01:18:51.998230661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d9d7df7-rbgrp,Uid:8544a506-2dac-42c8-8854-ee9ceda35a43,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"797a4a64ba6baa94523dd40aa34621105316149b3e431baa35bb6a9ec31b1058\"" Mar 25 01:18:52.014245 containerd[1757]: time="2025-03-25T01:18:52.014199135Z" level=info msg="CreateContainer within sandbox \"93e312b71f5b94d92184fca5f3461f90cf69fc21c164e1b8becf7505a96ff55b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"566227539de29351299aa3959246b8afe95c096eefeeb6d7c8ca3c9e3224a078\"" Mar 25 01:18:52.015690 containerd[1757]: time="2025-03-25T01:18:52.014945895Z" level=info msg="StartContainer for \"566227539de29351299aa3959246b8afe95c096eefeeb6d7c8ca3c9e3224a078\"" Mar 25 01:18:52.016010 containerd[1757]: 
time="2025-03-25T01:18:52.015974255Z" level=info msg="connecting to shim 566227539de29351299aa3959246b8afe95c096eefeeb6d7c8ca3c9e3224a078" address="unix:///run/containerd/s/2f2ae330009be67f60abb2e550b182150ab495a8c5aae44eb2fa7fa5bfeaa94f" protocol=ttrpc version=3 Mar 25 01:18:52.038810 systemd[1]: Started cri-containerd-566227539de29351299aa3959246b8afe95c096eefeeb6d7c8ca3c9e3224a078.scope - libcontainer container 566227539de29351299aa3959246b8afe95c096eefeeb6d7c8ca3c9e3224a078. Mar 25 01:18:52.070984 containerd[1757]: time="2025-03-25T01:18:52.070952595Z" level=info msg="StartContainer for \"566227539de29351299aa3959246b8afe95c096eefeeb6d7c8ca3c9e3224a078\" returns successfully" Mar 25 01:18:52.298459 kubelet[3333]: I0325 01:18:52.298145 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-c4w8j" podStartSLOduration=43.298125795 podStartE2EDuration="43.298125795s" podCreationTimestamp="2025-03-25 01:18:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:18:52.297692835 +0000 UTC m=+48.306023189" watchObservedRunningTime="2025-03-25 01:18:52.298125795 +0000 UTC m=+48.306456149" Mar 25 01:18:53.092874 containerd[1757]: time="2025-03-25T01:18:53.092675194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k5kmp,Uid:40b8113e-8b14-47e0-bcc0-3d1d07612344,Namespace:kube-system,Attempt:0,}" Mar 25 01:18:53.206668 systemd-networkd[1452]: cali7d93427f47d: Link UP Mar 25 01:18:53.206828 systemd-networkd[1452]: cali7d93427f47d: Gained carrier Mar 25 01:18:53.226406 containerd[1757]: 2025-03-25 01:18:53.136 [INFO][5007] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--a--f1ebfb6c0b-k8s-coredns--668d6bf9bc--k5kmp-eth0 coredns-668d6bf9bc- kube-system 40b8113e-8b14-47e0-bcc0-3d1d07612344 695 0 2025-03-25 01:18:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4284.0.0-a-f1ebfb6c0b coredns-668d6bf9bc-k5kmp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7d93427f47d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="36d7a1027d53c71b7d66badaed7e07903f037883ddf024647fbdb1aaf10e5c87" Namespace="kube-system" Pod="coredns-668d6bf9bc-k5kmp" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-coredns--668d6bf9bc--k5kmp-" Mar 25 01:18:53.226406 containerd[1757]: 2025-03-25 01:18:53.136 [INFO][5007] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="36d7a1027d53c71b7d66badaed7e07903f037883ddf024647fbdb1aaf10e5c87" Namespace="kube-system" Pod="coredns-668d6bf9bc-k5kmp" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-coredns--668d6bf9bc--k5kmp-eth0" Mar 25 01:18:53.226406 containerd[1757]: 2025-03-25 01:18:53.161 [INFO][5019] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="36d7a1027d53c71b7d66badaed7e07903f037883ddf024647fbdb1aaf10e5c87" HandleID="k8s-pod-network.36d7a1027d53c71b7d66badaed7e07903f037883ddf024647fbdb1aaf10e5c87" Workload="ci--4284.0.0--a--f1ebfb6c0b-k8s-coredns--668d6bf9bc--k5kmp-eth0" Mar 25 01:18:53.226406 containerd[1757]: 2025-03-25 01:18:53.172 [INFO][5019] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="36d7a1027d53c71b7d66badaed7e07903f037883ddf024647fbdb1aaf10e5c87" 
HandleID="k8s-pod-network.36d7a1027d53c71b7d66badaed7e07903f037883ddf024647fbdb1aaf10e5c87" Workload="ci--4284.0.0--a--f1ebfb6c0b-k8s-coredns--668d6bf9bc--k5kmp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000332d00), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4284.0.0-a-f1ebfb6c0b", "pod":"coredns-668d6bf9bc-k5kmp", "timestamp":"2025-03-25 01:18:53.16146401 +0000 UTC"}, Hostname:"ci-4284.0.0-a-f1ebfb6c0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 25 01:18:53.226406 containerd[1757]: 2025-03-25 01:18:53.172 [INFO][5019] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 25 01:18:53.226406 containerd[1757]: 2025-03-25 01:18:53.172 [INFO][5019] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 25 01:18:53.226406 containerd[1757]: 2025-03-25 01:18:53.172 [INFO][5019] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-a-f1ebfb6c0b' Mar 25 01:18:53.226406 containerd[1757]: 2025-03-25 01:18:53.174 [INFO][5019] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.36d7a1027d53c71b7d66badaed7e07903f037883ddf024647fbdb1aaf10e5c87" host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:53.226406 containerd[1757]: 2025-03-25 01:18:53.177 [INFO][5019] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:53.226406 containerd[1757]: 2025-03-25 01:18:53.181 [INFO][5019] ipam/ipam.go 489: Trying affinity for 192.168.74.0/26 host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:53.226406 containerd[1757]: 2025-03-25 01:18:53.183 [INFO][5019] ipam/ipam.go 155: Attempting to load block cidr=192.168.74.0/26 host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:53.226406 containerd[1757]: 2025-03-25 01:18:53.185 [INFO][5019] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.0/26 host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:53.226406 containerd[1757]: 2025-03-25 01:18:53.185 [INFO][5019] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.0/26 handle="k8s-pod-network.36d7a1027d53c71b7d66badaed7e07903f037883ddf024647fbdb1aaf10e5c87" host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:53.226406 containerd[1757]: 2025-03-25 01:18:53.187 [INFO][5019] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.36d7a1027d53c71b7d66badaed7e07903f037883ddf024647fbdb1aaf10e5c87 Mar 25 01:18:53.226406 containerd[1757]: 2025-03-25 01:18:53.195 [INFO][5019] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.74.0/26 handle="k8s-pod-network.36d7a1027d53c71b7d66badaed7e07903f037883ddf024647fbdb1aaf10e5c87" host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:53.226406 containerd[1757]: 2025-03-25 01:18:53.201 [INFO][5019] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.74.6/26] block=192.168.74.0/26 handle="k8s-pod-network.36d7a1027d53c71b7d66badaed7e07903f037883ddf024647fbdb1aaf10e5c87" host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:53.226406 containerd[1757]: 2025-03-25 01:18:53.201 [INFO][5019] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.6/26] handle="k8s-pod-network.36d7a1027d53c71b7d66badaed7e07903f037883ddf024647fbdb1aaf10e5c87" host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:18:53.226406 containerd[1757]: 2025-03-25 01:18:53.201 [INFO][5019] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 25 01:18:53.226406 containerd[1757]: 2025-03-25 01:18:53.201 [INFO][5019] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.6/26] IPv6=[] ContainerID="36d7a1027d53c71b7d66badaed7e07903f037883ddf024647fbdb1aaf10e5c87" HandleID="k8s-pod-network.36d7a1027d53c71b7d66badaed7e07903f037883ddf024647fbdb1aaf10e5c87" Workload="ci--4284.0.0--a--f1ebfb6c0b-k8s-coredns--668d6bf9bc--k5kmp-eth0" Mar 25 01:18:53.227222 containerd[1757]: 2025-03-25 01:18:53.204 [INFO][5007] cni-plugin/k8s.go 386: Populated endpoint ContainerID="36d7a1027d53c71b7d66badaed7e07903f037883ddf024647fbdb1aaf10e5c87" Namespace="kube-system" Pod="coredns-668d6bf9bc-k5kmp" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-coredns--668d6bf9bc--k5kmp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--a--f1ebfb6c0b-k8s-coredns--668d6bf9bc--k5kmp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"40b8113e-8b14-47e0-bcc0-3d1d07612344", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 1, 18, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-a-f1ebfb6c0b", ContainerID:"", Pod:"coredns-668d6bf9bc-k5kmp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7d93427f47d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 01:18:53.227222 containerd[1757]: 2025-03-25 01:18:53.204 [INFO][5007] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.74.6/32] ContainerID="36d7a1027d53c71b7d66badaed7e07903f037883ddf024647fbdb1aaf10e5c87" Namespace="kube-system" Pod="coredns-668d6bf9bc-k5kmp" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-coredns--668d6bf9bc--k5kmp-eth0" Mar 25 01:18:53.227222 containerd[1757]: 2025-03-25 01:18:53.204 [INFO][5007] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7d93427f47d ContainerID="36d7a1027d53c71b7d66badaed7e07903f037883ddf024647fbdb1aaf10e5c87" Namespace="kube-system" Pod="coredns-668d6bf9bc-k5kmp" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-coredns--668d6bf9bc--k5kmp-eth0" Mar 25 01:18:53.227222 containerd[1757]: 2025-03-25 01:18:53.207 [INFO][5007] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="36d7a1027d53c71b7d66badaed7e07903f037883ddf024647fbdb1aaf10e5c87" Namespace="kube-system" Pod="coredns-668d6bf9bc-k5kmp" 
WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-coredns--668d6bf9bc--k5kmp-eth0" Mar 25 01:18:53.227222 containerd[1757]: 2025-03-25 01:18:53.208 [INFO][5007] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="36d7a1027d53c71b7d66badaed7e07903f037883ddf024647fbdb1aaf10e5c87" Namespace="kube-system" Pod="coredns-668d6bf9bc-k5kmp" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-coredns--668d6bf9bc--k5kmp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--a--f1ebfb6c0b-k8s-coredns--668d6bf9bc--k5kmp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"40b8113e-8b14-47e0-bcc0-3d1d07612344", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 1, 18, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-a-f1ebfb6c0b", ContainerID:"36d7a1027d53c71b7d66badaed7e07903f037883ddf024647fbdb1aaf10e5c87", Pod:"coredns-668d6bf9bc-k5kmp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7d93427f47d", MAC:"3e:c3:cb:9e:a3:1c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 01:18:53.227222 containerd[1757]: 2025-03-25 01:18:53.223 [INFO][5007] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="36d7a1027d53c71b7d66badaed7e07903f037883ddf024647fbdb1aaf10e5c87" Namespace="kube-system" Pod="coredns-668d6bf9bc-k5kmp" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-coredns--668d6bf9bc--k5kmp-eth0" Mar 25 01:18:53.278805 containerd[1757]: time="2025-03-25T01:18:53.278760928Z" level=info msg="connecting to shim 36d7a1027d53c71b7d66badaed7e07903f037883ddf024647fbdb1aaf10e5c87" address="unix:///run/containerd/s/23055b462e3f47c668fa2b2a6fcdfe4e35ab08d482f27e07c4dd97eb81cdecdc" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:18:53.306988 systemd[1]: Started cri-containerd-36d7a1027d53c71b7d66badaed7e07903f037883ddf024647fbdb1aaf10e5c87.scope - libcontainer container 36d7a1027d53c71b7d66badaed7e07903f037883ddf024647fbdb1aaf10e5c87. 
Mar 25 01:18:53.357413 containerd[1757]: time="2025-03-25T01:18:53.357220300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k5kmp,Uid:40b8113e-8b14-47e0-bcc0-3d1d07612344,Namespace:kube-system,Attempt:0,} returns sandbox id \"36d7a1027d53c71b7d66badaed7e07903f037883ddf024647fbdb1aaf10e5c87\"" Mar 25 01:18:53.360112 containerd[1757]: time="2025-03-25T01:18:53.360038979Z" level=info msg="CreateContainer within sandbox \"36d7a1027d53c71b7d66badaed7e07903f037883ddf024647fbdb1aaf10e5c87\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 25 01:18:53.389512 containerd[1757]: time="2025-03-25T01:18:53.387645970Z" level=info msg="Container a34d512f6772ee2364cca491d05142be08db9af6e3bfb6beecb46276e8990cdd: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:18:53.411676 containerd[1757]: time="2025-03-25T01:18:53.411537201Z" level=info msg="CreateContainer within sandbox \"36d7a1027d53c71b7d66badaed7e07903f037883ddf024647fbdb1aaf10e5c87\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a34d512f6772ee2364cca491d05142be08db9af6e3bfb6beecb46276e8990cdd\"" Mar 25 01:18:53.412569 containerd[1757]: time="2025-03-25T01:18:53.412531761Z" level=info msg="StartContainer for \"a34d512f6772ee2364cca491d05142be08db9af6e3bfb6beecb46276e8990cdd\"" Mar 25 01:18:53.414256 containerd[1757]: time="2025-03-25T01:18:53.414217360Z" level=info msg="connecting to shim a34d512f6772ee2364cca491d05142be08db9af6e3bfb6beecb46276e8990cdd" address="unix:///run/containerd/s/23055b462e3f47c668fa2b2a6fcdfe4e35ab08d482f27e07c4dd97eb81cdecdc" protocol=ttrpc version=3 Mar 25 01:18:53.435773 systemd[1]: Started cri-containerd-a34d512f6772ee2364cca491d05142be08db9af6e3bfb6beecb46276e8990cdd.scope - libcontainer container a34d512f6772ee2364cca491d05142be08db9af6e3bfb6beecb46276e8990cdd. 
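The RunPodSandbox, CreateContainer-within-sandbox, StartContainer progression in these entries is the standard CRI call order kubelet drives against containerd's CRI service. A hedged sketch of that sequence using the published k8s.io/cri-api bindings; the empty PodSandboxConfig/ContainerConfig stubs are placeholders, not the real coredns configs:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtimeapi.PodSandboxConfig{ /* pod metadata, DNS, ports ... */ }

	// "RunPodSandbox ... returns sandbox id" in the log.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// "CreateContainer within sandbox ... returns container id".
	cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		Config:        &runtimeapi.ContainerConfig{ /* image, mounts, env ... */ },
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// "StartContainer for ... returns successfully".
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: cc.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
}
```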
Mar 25 01:18:53.467072 containerd[1757]: time="2025-03-25T01:18:53.467005142Z" level=info msg="StartContainer for \"a34d512f6772ee2364cca491d05142be08db9af6e3bfb6beecb46276e8990cdd\" returns successfully" Mar 25 01:18:54.361513 kubelet[3333]: I0325 01:18:54.361063 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-k5kmp" podStartSLOduration=45.361047346 podStartE2EDuration="45.361047346s" podCreationTimestamp="2025-03-25 01:18:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:18:54.318233561 +0000 UTC m=+50.326563915" watchObservedRunningTime="2025-03-25 01:18:54.361047346 +0000 UTC m=+50.369377700" Mar 25 01:18:54.695252 containerd[1757]: time="2025-03-25T01:18:54.695099667Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:18:54.697875 containerd[1757]: time="2025-03-25T01:18:54.697786706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.2: active requests=0, bytes read=32560257" Mar 25 01:18:54.701966 containerd[1757]: time="2025-03-25T01:18:54.701883225Z" level=info msg="ImageCreate event name:\"sha256:39a6e91a11a792441d34dccf5e11416a0fd297782f169fdb871a5558ad50b229\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:18:54.710521 containerd[1757]: time="2025-03-25T01:18:54.710426142Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:18:54.711375 containerd[1757]: time="2025-03-25T01:18:54.710980662Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" with image id \"sha256:39a6e91a11a792441d34dccf5e11416a0fd297782f169fdb871a5558ad50b229\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\", size \"33929982\" in 14.662784642s" Mar 25 01:18:54.711375 containerd[1757]: time="2025-03-25T01:18:54.711024422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" returns image reference \"sha256:39a6e91a11a792441d34dccf5e11416a0fd297782f169fdb871a5558ad50b229\"" Mar 25 01:18:54.724505 containerd[1757]: time="2025-03-25T01:18:54.724464217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\"" Mar 25 01:18:54.752384 containerd[1757]: time="2025-03-25T01:18:54.752329407Z" level=info msg="CreateContainer within sandbox \"230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 25 01:18:54.787662 containerd[1757]: time="2025-03-25T01:18:54.786824715Z" level=info msg="Container 4ea000afee09dfbf94474e21fe13dc2ea58aa6daffa5f442844936a2c19d2438: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:18:54.792216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount541178137.mount: Deactivated successfully. 
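The ImageCreate / "stop pulling image" / "Pulled image ... in 14.662784642s" trio above is containerd transferring and unpacking ghcr.io/flatcar/calico/kube-controllers:v3.29.2; the quoted duration is simply the wall-clock time of that transfer plus unpack. A minimal equivalent with the containerd Go client (1.x import paths assumed, as in the earlier sketch):

```go
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// WithPullUnpack also unpacks the layers into a snapshot, which the CRI
	// plugin needs before CreateContainer can reference the image.
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/kube-controllers:v3.29.2",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	size, _ := img.Size(ctx)
	fmt.Println(img.Name(), size) // resolves to repo digest sha256:6d1f392b... per the log
}
```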
Mar 25 01:18:54.807787 containerd[1757]: time="2025-03-25T01:18:54.807747988Z" level=info msg="CreateContainer within sandbox \"230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4ea000afee09dfbf94474e21fe13dc2ea58aa6daffa5f442844936a2c19d2438\"" Mar 25 01:18:54.808618 containerd[1757]: time="2025-03-25T01:18:54.808537787Z" level=info msg="StartContainer for \"4ea000afee09dfbf94474e21fe13dc2ea58aa6daffa5f442844936a2c19d2438\"" Mar 25 01:18:54.810367 containerd[1757]: time="2025-03-25T01:18:54.810320747Z" level=info msg="connecting to shim 4ea000afee09dfbf94474e21fe13dc2ea58aa6daffa5f442844936a2c19d2438" address="unix:///run/containerd/s/0389e3eeb0b404b928ad4a865ac4c2ba2c05ba3fa79d97a5d560924b5394ce2d" protocol=ttrpc version=3 Mar 25 01:18:54.829975 systemd[1]: Started cri-containerd-4ea000afee09dfbf94474e21fe13dc2ea58aa6daffa5f442844936a2c19d2438.scope - libcontainer container 4ea000afee09dfbf94474e21fe13dc2ea58aa6daffa5f442844936a2c19d2438. Mar 25 01:18:54.873889 containerd[1757]: time="2025-03-25T01:18:54.872664685Z" level=info msg="StartContainer for \"4ea000afee09dfbf94474e21fe13dc2ea58aa6daffa5f442844936a2c19d2438\" returns successfully" Mar 25 01:18:54.880714 systemd-networkd[1452]: cali7d93427f47d: Gained IPv6LL Mar 25 01:18:55.336488 kubelet[3333]: I0325 01:18:55.335755 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5f856dbd65-p9hgt" podStartSLOduration=23.663136842 podStartE2EDuration="38.335734641s" podCreationTimestamp="2025-03-25 01:18:17 +0000 UTC" firstStartedPulling="2025-03-25 01:18:40.04652582 +0000 UTC m=+36.054856174" lastFinishedPulling="2025-03-25 01:18:54.719123579 +0000 UTC m=+50.727453973" observedRunningTime="2025-03-25 01:18:55.335251561 +0000 UTC m=+51.343581915" watchObservedRunningTime="2025-03-25 01:18:55.335734641 +0000 UTC m=+51.344064995" Mar 25 01:18:55.355834 containerd[1757]: time="2025-03-25T01:18:55.355799234Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4ea000afee09dfbf94474e21fe13dc2ea58aa6daffa5f442844936a2c19d2438\" id:\"c3027f154b3388d00540ab12245d93ce5f96b1fc196e0edfaa96cb1a91f25e31\" pid:5170 exited_at:{seconds:1742865535 nanos:350037436}" Mar 25 01:18:56.381184 containerd[1757]: time="2025-03-25T01:18:56.381102748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:18:56.383647 containerd[1757]: time="2025-03-25T01:18:56.383472507Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7473801" Mar 25 01:18:56.394258 containerd[1757]: time="2025-03-25T01:18:56.394133463Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id \"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size \"8843558\" in 1.669319126s" Mar 25 01:18:56.394258 containerd[1757]: time="2025-03-25T01:18:56.394170943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference \"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\"" Mar 25 01:18:56.394258 containerd[1757]: time="2025-03-25T01:18:56.394202543Z" level=info msg="ImageCreate event 
name:\"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:18:56.394986 containerd[1757]: time="2025-03-25T01:18:56.394795143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:18:56.399253 containerd[1757]: time="2025-03-25T01:18:56.398743102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\"" Mar 25 01:18:56.402881 containerd[1757]: time="2025-03-25T01:18:56.402837580Z" level=info msg="CreateContainer within sandbox \"41d6ed43f3b09ed055d2a47ac0e95b196e8e84cefa7994e194e82e7b13407729\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 25 01:18:56.434847 containerd[1757]: time="2025-03-25T01:18:56.434784729Z" level=info msg="Container f4b45accfaa5453786e563170426c64fe3c53375c023988f2cd0456321b4f61d: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:18:56.461472 containerd[1757]: time="2025-03-25T01:18:56.461428519Z" level=info msg="CreateContainer within sandbox \"41d6ed43f3b09ed055d2a47ac0e95b196e8e84cefa7994e194e82e7b13407729\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f4b45accfaa5453786e563170426c64fe3c53375c023988f2cd0456321b4f61d\"" Mar 25 01:18:56.462133 containerd[1757]: time="2025-03-25T01:18:56.461990199Z" level=info msg="StartContainer for \"f4b45accfaa5453786e563170426c64fe3c53375c023988f2cd0456321b4f61d\"" Mar 25 01:18:56.469422 containerd[1757]: time="2025-03-25T01:18:56.469356316Z" level=info msg="connecting to shim f4b45accfaa5453786e563170426c64fe3c53375c023988f2cd0456321b4f61d" address="unix:///run/containerd/s/dcff3eb1fd5a8ae2bd010e46dff349990151cdcb005cdb34febbf776da4d49a8" protocol=ttrpc version=3 Mar 25 01:18:56.490761 systemd[1]: Started cri-containerd-f4b45accfaa5453786e563170426c64fe3c53375c023988f2cd0456321b4f61d.scope - libcontainer container f4b45accfaa5453786e563170426c64fe3c53375c023988f2cd0456321b4f61d. 
Mar 25 01:18:56.531366 containerd[1757]: time="2025-03-25T01:18:56.531235334Z" level=info msg="StartContainer for \"f4b45accfaa5453786e563170426c64fe3c53375c023988f2cd0456321b4f61d\" returns successfully" Mar 25 01:18:59.347827 containerd[1757]: time="2025-03-25T01:18:59.347777201Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:18:59.350140 containerd[1757]: time="2025-03-25T01:18:59.350081840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=40253267" Mar 25 01:18:59.355645 containerd[1757]: time="2025-03-25T01:18:59.355568118Z" level=info msg="ImageCreate event name:\"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:18:59.363759 containerd[1757]: time="2025-03-25T01:18:59.363701595Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:18:59.364443 containerd[1757]: time="2025-03-25T01:18:59.363998595Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"41623040\" in 2.965213133s" Mar 25 01:18:59.364443 containerd[1757]: time="2025-03-25T01:18:59.364031755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\"" Mar 25 01:18:59.365912 containerd[1757]: time="2025-03-25T01:18:59.365871274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\"" Mar 25 01:18:59.366657 containerd[1757]: time="2025-03-25T01:18:59.366624154Z" level=info msg="CreateContainer within sandbox \"fd0d0b557d7227f5dd9bfe9d00602cba696a07b2155bb6af1ad65f75c822ab40\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 25 01:18:59.398953 containerd[1757]: time="2025-03-25T01:18:59.398042903Z" level=info msg="Container 446d5a5486889dd57a2a4cb2fd45ad4f0157abce783cbaed052639830aa270ec: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:18:59.400189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1941529244.mount: Deactivated successfully. 
Mar 25 01:18:59.419337 containerd[1757]: time="2025-03-25T01:18:59.419288695Z" level=info msg="CreateContainer within sandbox \"fd0d0b557d7227f5dd9bfe9d00602cba696a07b2155bb6af1ad65f75c822ab40\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"446d5a5486889dd57a2a4cb2fd45ad4f0157abce783cbaed052639830aa270ec\"" Mar 25 01:18:59.420140 containerd[1757]: time="2025-03-25T01:18:59.420094655Z" level=info msg="StartContainer for \"446d5a5486889dd57a2a4cb2fd45ad4f0157abce783cbaed052639830aa270ec\"" Mar 25 01:18:59.421723 containerd[1757]: time="2025-03-25T01:18:59.421591694Z" level=info msg="connecting to shim 446d5a5486889dd57a2a4cb2fd45ad4f0157abce783cbaed052639830aa270ec" address="unix:///run/containerd/s/1f11b9c117eb4c90d30768d6308d4a1532983f1baf11c25db117804f409bcb4a" protocol=ttrpc version=3 Mar 25 01:18:59.448838 systemd[1]: Started cri-containerd-446d5a5486889dd57a2a4cb2fd45ad4f0157abce783cbaed052639830aa270ec.scope - libcontainer container 446d5a5486889dd57a2a4cb2fd45ad4f0157abce783cbaed052639830aa270ec. Mar 25 01:18:59.486479 containerd[1757]: time="2025-03-25T01:18:59.486436791Z" level=info msg="StartContainer for \"446d5a5486889dd57a2a4cb2fd45ad4f0157abce783cbaed052639830aa270ec\" returns successfully" Mar 25 01:18:59.714125 containerd[1757]: time="2025-03-25T01:18:59.713293709Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:18:59.717054 containerd[1757]: time="2025-03-25T01:18:59.716994748Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=77" Mar 25 01:18:59.719340 containerd[1757]: time="2025-03-25T01:18:59.719306427Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"41623040\" in 353.393193ms" Mar 25 01:18:59.719563 containerd[1757]: time="2025-03-25T01:18:59.719519667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\"" Mar 25 01:18:59.721694 containerd[1757]: time="2025-03-25T01:18:59.721586666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\"" Mar 25 01:18:59.724526 containerd[1757]: time="2025-03-25T01:18:59.724347345Z" level=info msg="CreateContainer within sandbox \"797a4a64ba6baa94523dd40aa34621105316149b3e431baa35bb6a9ec31b1058\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 25 01:18:59.769657 containerd[1757]: time="2025-03-25T01:18:59.768863409Z" level=info msg="Container 02ccf43ff19739d50957ec9f38f7f6f6a4d3cb9e70d8813fe4707a2a3f51e47f: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:18:59.774372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2927978648.mount: Deactivated successfully. 
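Note the two pulls of the same calico/apiserver image above: the first took 2.965s, the second only 353ms with "bytes read=77", because every blob was already in the content store and only the manifest had to be re-resolved. A hedged sketch of checking the local image store before pulling, again with the containerd 1.x client:

```go
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/errdefs"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	ref := "ghcr.io/flatcar/calico/apiserver:v3.29.2"
	// GetImage consults the local image store without touching the registry.
	if img, err := client.GetImage(ctx, ref); err == nil {
		fmt.Println("already present:", img.Name())
	} else if errdefs.IsNotFound(err) {
		// Only now would a network pull be needed.
		if _, err := client.Pull(ctx, ref, containerd.WithPullUnpack); err != nil {
			log.Fatal(err)
		}
	} else {
		log.Fatal(err)
	}
}
```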
Mar 25 01:18:59.791385 containerd[1757]: time="2025-03-25T01:18:59.791199161Z" level=info msg="CreateContainer within sandbox \"797a4a64ba6baa94523dd40aa34621105316149b3e431baa35bb6a9ec31b1058\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"02ccf43ff19739d50957ec9f38f7f6f6a4d3cb9e70d8813fe4707a2a3f51e47f\"" Mar 25 01:18:59.793221 containerd[1757]: time="2025-03-25T01:18:59.793089121Z" level=info msg="StartContainer for \"02ccf43ff19739d50957ec9f38f7f6f6a4d3cb9e70d8813fe4707a2a3f51e47f\"" Mar 25 01:18:59.798123 containerd[1757]: time="2025-03-25T01:18:59.797562639Z" level=info msg="connecting to shim 02ccf43ff19739d50957ec9f38f7f6f6a4d3cb9e70d8813fe4707a2a3f51e47f" address="unix:///run/containerd/s/a5f5f49553df9eb6762a01ac6ba4dd967bf8cb87c7a3f0842696a1048835ee27" protocol=ttrpc version=3 Mar 25 01:18:59.823561 systemd[1]: Started cri-containerd-02ccf43ff19739d50957ec9f38f7f6f6a4d3cb9e70d8813fe4707a2a3f51e47f.scope - libcontainer container 02ccf43ff19739d50957ec9f38f7f6f6a4d3cb9e70d8813fe4707a2a3f51e47f. Mar 25 01:18:59.869881 containerd[1757]: time="2025-03-25T01:18:59.869721453Z" level=info msg="StartContainer for \"02ccf43ff19739d50957ec9f38f7f6f6a4d3cb9e70d8813fe4707a2a3f51e47f\" returns successfully" Mar 25 01:19:00.375255 kubelet[3333]: I0325 01:19:00.374928 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-85d9d7df7-rbgrp" podStartSLOduration=34.653653104 podStartE2EDuration="42.374906351s" podCreationTimestamp="2025-03-25 01:18:18 +0000 UTC" firstStartedPulling="2025-03-25 01:18:51.99975634 +0000 UTC m=+48.008086654" lastFinishedPulling="2025-03-25 01:18:59.721009347 +0000 UTC m=+55.729339901" observedRunningTime="2025-03-25 01:19:00.35009544 +0000 UTC m=+56.358425794" watchObservedRunningTime="2025-03-25 01:19:00.374906351 +0000 UTC m=+56.383236705" Mar 25 01:19:01.192192 containerd[1757]: time="2025-03-25T01:19:01.191328858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:19:01.194970 containerd[1757]: time="2025-03-25T01:19:01.194675736Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13121717" Mar 25 01:19:01.199629 containerd[1757]: time="2025-03-25T01:19:01.199541615Z" level=info msg="ImageCreate event name:\"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:19:01.206295 containerd[1757]: time="2025-03-25T01:19:01.206231772Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:19:01.206996 containerd[1757]: time="2025-03-25T01:19:01.206963252Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id \"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"14491426\" in 1.485327466s" Mar 25 01:19:01.207219 containerd[1757]: time="2025-03-25T01:19:01.206997372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image 
reference \"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\"" Mar 25 01:19:01.209617 containerd[1757]: time="2025-03-25T01:19:01.209376891Z" level=info msg="CreateContainer within sandbox \"41d6ed43f3b09ed055d2a47ac0e95b196e8e84cefa7994e194e82e7b13407729\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 25 01:19:01.242733 containerd[1757]: time="2025-03-25T01:19:01.241587120Z" level=info msg="Container b6e21b8b114743ceb28dbb1a0efaf3cf2ec7bbe675f2c79077bdb8c1dc58846e: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:19:01.273134 containerd[1757]: time="2025-03-25T01:19:01.273081468Z" level=info msg="CreateContainer within sandbox \"41d6ed43f3b09ed055d2a47ac0e95b196e8e84cefa7994e194e82e7b13407729\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b6e21b8b114743ceb28dbb1a0efaf3cf2ec7bbe675f2c79077bdb8c1dc58846e\"" Mar 25 01:19:01.273899 containerd[1757]: time="2025-03-25T01:19:01.273875148Z" level=info msg="StartContainer for \"b6e21b8b114743ceb28dbb1a0efaf3cf2ec7bbe675f2c79077bdb8c1dc58846e\"" Mar 25 01:19:01.275859 containerd[1757]: time="2025-03-25T01:19:01.275822387Z" level=info msg="connecting to shim b6e21b8b114743ceb28dbb1a0efaf3cf2ec7bbe675f2c79077bdb8c1dc58846e" address="unix:///run/containerd/s/dcff3eb1fd5a8ae2bd010e46dff349990151cdcb005cdb34febbf776da4d49a8" protocol=ttrpc version=3 Mar 25 01:19:01.311809 systemd[1]: Started cri-containerd-b6e21b8b114743ceb28dbb1a0efaf3cf2ec7bbe675f2c79077bdb8c1dc58846e.scope - libcontainer container b6e21b8b114743ceb28dbb1a0efaf3cf2ec7bbe675f2c79077bdb8c1dc58846e. Mar 25 01:19:01.339071 kubelet[3333]: I0325 01:19:01.338719 3333 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 25 01:19:01.375141 containerd[1757]: time="2025-03-25T01:19:01.375095432Z" level=info msg="StartContainer for \"b6e21b8b114743ceb28dbb1a0efaf3cf2ec7bbe675f2c79077bdb8c1dc58846e\" returns successfully" Mar 25 01:19:01.495771 kubelet[3333]: I0325 01:19:01.495417 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-85d9d7df7-t5jcx" podStartSLOduration=36.106295902 podStartE2EDuration="43.495387028s" podCreationTimestamp="2025-03-25 01:18:18 +0000 UTC" firstStartedPulling="2025-03-25 01:18:51.975529389 +0000 UTC m=+47.983859743" lastFinishedPulling="2025-03-25 01:18:59.364620515 +0000 UTC m=+55.372950869" observedRunningTime="2025-03-25 01:19:00.375234951 +0000 UTC m=+56.383565305" watchObservedRunningTime="2025-03-25 01:19:01.495387028 +0000 UTC m=+57.503717342" Mar 25 01:19:02.358573 kubelet[3333]: I0325 01:19:02.358505 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-d6vfh" podStartSLOduration=36.057226799 podStartE2EDuration="45.358484838s" podCreationTimestamp="2025-03-25 01:18:17 +0000 UTC" firstStartedPulling="2025-03-25 01:18:51.906702573 +0000 UTC m=+47.915032927" lastFinishedPulling="2025-03-25 01:19:01.207960612 +0000 UTC m=+57.216290966" observedRunningTime="2025-03-25 01:19:02.358144238 +0000 UTC m=+58.366474592" watchObservedRunningTime="2025-03-25 01:19:02.358484838 +0000 UTC m=+58.366815232" Mar 25 01:19:02.390268 kubelet[3333]: I0325 01:19:02.390231 3333 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 25 01:19:02.394407 kubelet[3333]: I0325 01:19:02.394369 3333 csi_plugin.go:113] 
kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 25 01:19:09.332370 containerd[1757]: time="2025-03-25T01:19:09.332308829Z" level=info msg="TaskExit event in podsandbox handler container_id:\"75d5eaca95570629b3e8a46f8a8e245db501832c00567a5b7702b2dc1dcaab6a\" id:\"03659bdbc82b6ddcdef8544ef1e24c36f0b91af07d3f9cbf2bf851b0f343b61d\" pid:5348 exited_at:{seconds:1742865549 nanos:331666550}" Mar 25 01:19:13.641035 containerd[1757]: time="2025-03-25T01:19:13.640887888Z" level=info msg="StopContainer for \"5a37796ff68772fcfcdec93e006734a6ef7d307e590d4bde350aeb81c26346ab\" with timeout 300 (s)" Mar 25 01:19:13.651663 containerd[1757]: time="2025-03-25T01:19:13.651269964Z" level=info msg="Stop container \"5a37796ff68772fcfcdec93e006734a6ef7d307e590d4bde350aeb81c26346ab\" with signal terminated" Mar 25 01:19:13.807058 containerd[1757]: time="2025-03-25T01:19:13.807016268Z" level=info msg="StopContainer for \"4ea000afee09dfbf94474e21fe13dc2ea58aa6daffa5f442844936a2c19d2438\" with timeout 30 (s)" Mar 25 01:19:13.807474 containerd[1757]: time="2025-03-25T01:19:13.807445308Z" level=info msg="Stop container \"4ea000afee09dfbf94474e21fe13dc2ea58aa6daffa5f442844936a2c19d2438\" with signal terminated" Mar 25 01:19:13.834160 systemd[1]: cri-containerd-4ea000afee09dfbf94474e21fe13dc2ea58aa6daffa5f442844936a2c19d2438.scope: Deactivated successfully. Mar 25 01:19:13.841826 containerd[1757]: time="2025-03-25T01:19:13.841778216Z" level=info msg="received exit event container_id:\"4ea000afee09dfbf94474e21fe13dc2ea58aa6daffa5f442844936a2c19d2438\" id:\"4ea000afee09dfbf94474e21fe13dc2ea58aa6daffa5f442844936a2c19d2438\" pid:5139 exit_status:2 exited_at:{seconds:1742865553 nanos:841302696}" Mar 25 01:19:13.842307 containerd[1757]: time="2025-03-25T01:19:13.841989576Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4ea000afee09dfbf94474e21fe13dc2ea58aa6daffa5f442844936a2c19d2438\" id:\"4ea000afee09dfbf94474e21fe13dc2ea58aa6daffa5f442844936a2c19d2438\" pid:5139 exit_status:2 exited_at:{seconds:1742865553 nanos:841302696}" Mar 25 01:19:13.866665 containerd[1757]: time="2025-03-25T01:19:13.866402287Z" level=info msg="TaskExit event in podsandbox handler container_id:\"75d5eaca95570629b3e8a46f8a8e245db501832c00567a5b7702b2dc1dcaab6a\" id:\"75bf51c09f2ec1b1fdc894cf7f68bc8e7009304e988935f8e41d520c779d3330\" pid:5385 exited_at:{seconds:1742865553 nanos:865854847}" Mar 25 01:19:13.870488 containerd[1757]: time="2025-03-25T01:19:13.870449206Z" level=info msg="StopContainer for \"75d5eaca95570629b3e8a46f8a8e245db501832c00567a5b7702b2dc1dcaab6a\" with timeout 5 (s)" Mar 25 01:19:13.871999 containerd[1757]: time="2025-03-25T01:19:13.871824245Z" level=info msg="Stop container \"75d5eaca95570629b3e8a46f8a8e245db501832c00567a5b7702b2dc1dcaab6a\" with signal terminated" Mar 25 01:19:13.878792 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ea000afee09dfbf94474e21fe13dc2ea58aa6daffa5f442844936a2c19d2438-rootfs.mount: Deactivated successfully. Mar 25 01:19:13.912957 systemd[1]: cri-containerd-75d5eaca95570629b3e8a46f8a8e245db501832c00567a5b7702b2dc1dcaab6a.scope: Deactivated successfully. Mar 25 01:19:13.914468 systemd[1]: cri-containerd-75d5eaca95570629b3e8a46f8a8e245db501832c00567a5b7702b2dc1dcaab6a.scope: Consumed 1.493s CPU time, 157.5M memory peak, 4K read from disk, 624K written to disk. 
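The "StopContainer ... with timeout 300 (s)" / "Stop container ... with signal terminated" pairs above reflect the CRI stop contract: the runtime sends SIGTERM to the task, waits up to Timeout seconds, then SIGKILLs it (the later TaskExit events with exit_status record the result). A hedged client-side sketch, with the container ID and timeout copied from the first stop above:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Timeout is the grace period in seconds before escalation to SIGKILL.
	if _, err := rt.StopContainer(context.Background(), &runtimeapi.StopContainerRequest{
		ContainerId: "5a37796ff68772fcfcdec93e006734a6ef7d307e590d4bde350aeb81c26346ab",
		Timeout:     300,
	}); err != nil {
		log.Fatal(err)
	}
}
```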
Mar 25 01:19:13.916254 containerd[1757]: time="2025-03-25T01:19:13.916091229Z" level=info msg="TaskExit event in podsandbox handler container_id:\"75d5eaca95570629b3e8a46f8a8e245db501832c00567a5b7702b2dc1dcaab6a\" id:\"75d5eaca95570629b3e8a46f8a8e245db501832c00567a5b7702b2dc1dcaab6a\" pid:4252 exited_at:{seconds:1742865553 nanos:915398630}" Mar 25 01:19:13.921967 containerd[1757]: time="2025-03-25T01:19:13.921899587Z" level=info msg="received exit event container_id:\"75d5eaca95570629b3e8a46f8a8e245db501832c00567a5b7702b2dc1dcaab6a\" id:\"75d5eaca95570629b3e8a46f8a8e245db501832c00567a5b7702b2dc1dcaab6a\" pid:4252 exited_at:{seconds:1742865553 nanos:915398630}" Mar 25 01:19:13.947035 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75d5eaca95570629b3e8a46f8a8e245db501832c00567a5b7702b2dc1dcaab6a-rootfs.mount: Deactivated successfully. Mar 25 01:19:14.575232 containerd[1757]: time="2025-03-25T01:19:14.575185073Z" level=info msg="StopContainer for \"4ea000afee09dfbf94474e21fe13dc2ea58aa6daffa5f442844936a2c19d2438\" returns successfully" Mar 25 01:19:14.577213 containerd[1757]: time="2025-03-25T01:19:14.577076313Z" level=info msg="StopPodSandbox for \"230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2\"" Mar 25 01:19:14.577213 containerd[1757]: time="2025-03-25T01:19:14.577151713Z" level=info msg="Container to stop \"4ea000afee09dfbf94474e21fe13dc2ea58aa6daffa5f442844936a2c19d2438\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 01:19:14.579707 containerd[1757]: time="2025-03-25T01:19:14.579506352Z" level=info msg="StopContainer for \"75d5eaca95570629b3e8a46f8a8e245db501832c00567a5b7702b2dc1dcaab6a\" returns successfully" Mar 25 01:19:14.580699 containerd[1757]: time="2025-03-25T01:19:14.580515551Z" level=info msg="StopPodSandbox for \"7b08991c6a14f6e7a6f913c195bf1325a677ddacda527262ece03c581530bcbb\"" Mar 25 01:19:14.580699 containerd[1757]: time="2025-03-25T01:19:14.580668671Z" level=info msg="Container to stop \"170b91d03fb43817e727cf27bc54e927f3bfd56b9ddc30afcd9bccc6067d4614\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 01:19:14.580945 containerd[1757]: time="2025-03-25T01:19:14.580914511Z" level=info msg="Container to stop \"45c8a44c5af1947f4707575baae1dab300f0cc8231301c7a44bd501202dd6fe3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 01:19:14.581112 containerd[1757]: time="2025-03-25T01:19:14.581038471Z" level=info msg="Container to stop \"75d5eaca95570629b3e8a46f8a8e245db501832c00567a5b7702b2dc1dcaab6a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 01:19:14.588986 systemd[1]: cri-containerd-230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2.scope: Deactivated successfully. Mar 25 01:19:14.593299 containerd[1757]: time="2025-03-25T01:19:14.593195867Z" level=info msg="TaskExit event in podsandbox handler container_id:\"230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2\" id:\"230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2\" pid:4513 exit_status:137 exited_at:{seconds:1742865554 nanos:591263108}" Mar 25 01:19:14.597777 systemd[1]: cri-containerd-7b08991c6a14f6e7a6f913c195bf1325a677ddacda527262ece03c581530bcbb.scope: Deactivated successfully. 
Mar 25 01:19:14.642618 containerd[1757]: time="2025-03-25T01:19:14.642272089Z" level=info msg="shim disconnected" id=230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2 namespace=k8s.io Mar 25 01:19:14.642618 containerd[1757]: time="2025-03-25T01:19:14.642412929Z" level=warning msg="cleaning up after shim disconnected" id=230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2 namespace=k8s.io Mar 25 01:19:14.642618 containerd[1757]: time="2025-03-25T01:19:14.642445089Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 25 01:19:14.647582 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2-rootfs.mount: Deactivated successfully. Mar 25 01:19:14.653386 containerd[1757]: time="2025-03-25T01:19:14.652866725Z" level=info msg="shim disconnected" id=7b08991c6a14f6e7a6f913c195bf1325a677ddacda527262ece03c581530bcbb namespace=k8s.io Mar 25 01:19:14.653386 containerd[1757]: time="2025-03-25T01:19:14.653429085Z" level=warning msg="cleaning up after shim disconnected" id=7b08991c6a14f6e7a6f913c195bf1325a677ddacda527262ece03c581530bcbb namespace=k8s.io Mar 25 01:19:14.653386 containerd[1757]: time="2025-03-25T01:19:14.653459405Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 25 01:19:14.654915 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b08991c6a14f6e7a6f913c195bf1325a677ddacda527262ece03c581530bcbb-rootfs.mount: Deactivated successfully. Mar 25 01:19:14.688658 containerd[1757]: time="2025-03-25T01:19:14.688583033Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b08991c6a14f6e7a6f913c195bf1325a677ddacda527262ece03c581530bcbb\" id:\"7b08991c6a14f6e7a6f913c195bf1325a677ddacda527262ece03c581530bcbb\" pid:3847 exit_status:137 exited_at:{seconds:1742865554 nanos:600958864}" Mar 25 01:19:14.692500 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2-shm.mount: Deactivated successfully. Mar 25 01:19:14.696796 containerd[1757]: time="2025-03-25T01:19:14.696217550Z" level=info msg="received exit event sandbox_id:\"7b08991c6a14f6e7a6f913c195bf1325a677ddacda527262ece03c581530bcbb\" exit_status:137 exited_at:{seconds:1742865554 nanos:600958864}" Mar 25 01:19:14.700503 containerd[1757]: time="2025-03-25T01:19:14.700463828Z" level=info msg="TearDown network for sandbox \"7b08991c6a14f6e7a6f913c195bf1325a677ddacda527262ece03c581530bcbb\" successfully" Mar 25 01:19:14.700792 containerd[1757]: time="2025-03-25T01:19:14.700773388Z" level=info msg="StopPodSandbox for \"7b08991c6a14f6e7a6f913c195bf1325a677ddacda527262ece03c581530bcbb\" returns successfully" Mar 25 01:19:14.702218 containerd[1757]: time="2025-03-25T01:19:14.701408628Z" level=info msg="received exit event sandbox_id:\"230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2\" exit_status:137 exited_at:{seconds:1742865554 nanos:591263108}" Mar 25 01:19:14.780631 kubelet[3333]: I0325 01:19:14.780487 3333 memory_manager.go:355] "RemoveStaleState removing state" podUID="c290e755-1595-4bfc-bdcc-5fa900680b95" containerName="calico-node" Mar 25 01:19:14.794700 systemd[1]: Created slice kubepods-besteffort-pod561a8645_23c9_4062_a095_ef15fb2802a6.slice - libcontainer container kubepods-besteffort-pod561a8645_23c9_4062_a095_ef15fb2802a6.slice. 
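The "Created slice kubepods-besteffort-pod561a8645_23c9_4062_a095_ef15fb2802a6.slice" line is kubelet's systemd cgroup driver creating the cgroup for the replacement calico-node pod: the QoS-class slice name plus the pod UID with its dashes mapped to underscores. A hypothetical helper (not kubelet's code) reproducing that unit name:

```go
package main

import (
	"fmt"
	"strings"
)

// besteffortPodSlice derives the systemd slice unit for a BestEffort pod:
// "-" in the pod UID is not valid inside a single slice component, so the
// systemd cgroup driver replaces it with "_".
func besteffortPodSlice(podUID string) string {
	return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	// UID taken from the kubepods slice created above for calico-node-lpv54.
	fmt.Println(besteffortPodSlice("561a8645-23c9-4062-a095-ef15fb2802a6"))
	// -> kubepods-besteffort-pod561a8645_23c9_4062_a095_ef15fb2802a6.slice
}
```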
Mar 25 01:19:14.822878 systemd-networkd[1452]: calidf092883c6f: Link DOWN Mar 25 01:19:14.822890 systemd-networkd[1452]: calidf092883c6f: Lost carrier Mar 25 01:19:14.835574 systemd[1]: cri-containerd-5a37796ff68772fcfcdec93e006734a6ef7d307e590d4bde350aeb81c26346ab.scope: Deactivated successfully. Mar 25 01:19:14.845314 containerd[1757]: time="2025-03-25T01:19:14.845086457Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a37796ff68772fcfcdec93e006734a6ef7d307e590d4bde350aeb81c26346ab\" id:\"5a37796ff68772fcfcdec93e006734a6ef7d307e590d4bde350aeb81c26346ab\" pid:3879 exit_status:1 exited_at:{seconds:1742865554 nanos:844235577}" Mar 25 01:19:14.846215 containerd[1757]: time="2025-03-25T01:19:14.845884696Z" level=info msg="received exit event container_id:\"5a37796ff68772fcfcdec93e006734a6ef7d307e590d4bde350aeb81c26346ab\" id:\"5a37796ff68772fcfcdec93e006734a6ef7d307e590d4bde350aeb81c26346ab\" pid:3879 exit_status:1 exited_at:{seconds:1742865554 nanos:844235577}" Mar 25 01:19:14.851549 kubelet[3333]: I0325 01:19:14.851410 3333 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c290e755-1595-4bfc-bdcc-5fa900680b95-policysync\") pod \"c290e755-1595-4bfc-bdcc-5fa900680b95\" (UID: \"c290e755-1595-4bfc-bdcc-5fa900680b95\") " Mar 25 01:19:14.851549 kubelet[3333]: I0325 01:19:14.851442 3333 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c290e755-1595-4bfc-bdcc-5fa900680b95-var-lib-calico\") pod \"c290e755-1595-4bfc-bdcc-5fa900680b95\" (UID: \"c290e755-1595-4bfc-bdcc-5fa900680b95\") " Mar 25 01:19:14.851549 kubelet[3333]: I0325 01:19:14.851513 3333 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c290e755-1595-4bfc-bdcc-5fa900680b95-node-certs\") pod \"c290e755-1595-4bfc-bdcc-5fa900680b95\" (UID: \"c290e755-1595-4bfc-bdcc-5fa900680b95\") " Mar 25 01:19:14.852208 kubelet[3333]: I0325 01:19:14.851672 3333 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c290e755-1595-4bfc-bdcc-5fa900680b95-cni-log-dir\") pod \"c290e755-1595-4bfc-bdcc-5fa900680b95\" (UID: \"c290e755-1595-4bfc-bdcc-5fa900680b95\") " Mar 25 01:19:14.852208 kubelet[3333]: I0325 01:19:14.851701 3333 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c290e755-1595-4bfc-bdcc-5fa900680b95-cni-net-dir\") pod \"c290e755-1595-4bfc-bdcc-5fa900680b95\" (UID: \"c290e755-1595-4bfc-bdcc-5fa900680b95\") " Mar 25 01:19:14.852208 kubelet[3333]: I0325 01:19:14.851719 3333 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c290e755-1595-4bfc-bdcc-5fa900680b95-xtables-lock\") pod \"c290e755-1595-4bfc-bdcc-5fa900680b95\" (UID: \"c290e755-1595-4bfc-bdcc-5fa900680b95\") " Mar 25 01:19:14.852208 kubelet[3333]: I0325 01:19:14.852089 3333 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c290e755-1595-4bfc-bdcc-5fa900680b95-var-run-calico\") pod \"c290e755-1595-4bfc-bdcc-5fa900680b95\" (UID: \"c290e755-1595-4bfc-bdcc-5fa900680b95\") " Mar 25 01:19:14.852208 kubelet[3333]: I0325 01:19:14.852123 3333 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c290e755-1595-4bfc-bdcc-5fa900680b95-cni-bin-dir\") pod \"c290e755-1595-4bfc-bdcc-5fa900680b95\" (UID: \"c290e755-1595-4bfc-bdcc-5fa900680b95\") " Mar 25 01:19:14.854799 kubelet[3333]: I0325 01:19:14.852492 3333 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c290e755-1595-4bfc-bdcc-5fa900680b95-flexvol-driver-host\") pod \"c290e755-1595-4bfc-bdcc-5fa900680b95\" (UID: \"c290e755-1595-4bfc-bdcc-5fa900680b95\") " Mar 25 01:19:14.855070 kubelet[3333]: I0325 01:19:14.854887 3333 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c290e755-1595-4bfc-bdcc-5fa900680b95-policysync" (OuterVolumeSpecName: "policysync") pod "c290e755-1595-4bfc-bdcc-5fa900680b95" (UID: "c290e755-1595-4bfc-bdcc-5fa900680b95"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 25 01:19:14.855070 kubelet[3333]: I0325 01:19:14.854954 3333 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c290e755-1595-4bfc-bdcc-5fa900680b95-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "c290e755-1595-4bfc-bdcc-5fa900680b95" (UID: "c290e755-1595-4bfc-bdcc-5fa900680b95"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 25 01:19:14.855070 kubelet[3333]: I0325 01:19:14.854975 3333 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c290e755-1595-4bfc-bdcc-5fa900680b95-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c290e755-1595-4bfc-bdcc-5fa900680b95" (UID: "c290e755-1595-4bfc-bdcc-5fa900680b95"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 25 01:19:14.855070 kubelet[3333]: I0325 01:19:14.854991 3333 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c290e755-1595-4bfc-bdcc-5fa900680b95-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "c290e755-1595-4bfc-bdcc-5fa900680b95" (UID: "c290e755-1595-4bfc-bdcc-5fa900680b95"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 25 01:19:14.855070 kubelet[3333]: I0325 01:19:14.855007 3333 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c290e755-1595-4bfc-bdcc-5fa900680b95-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "c290e755-1595-4bfc-bdcc-5fa900680b95" (UID: "c290e755-1595-4bfc-bdcc-5fa900680b95"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 25 01:19:14.855234 kubelet[3333]: I0325 01:19:14.855022 3333 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c290e755-1595-4bfc-bdcc-5fa900680b95-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "c290e755-1595-4bfc-bdcc-5fa900680b95" (UID: "c290e755-1595-4bfc-bdcc-5fa900680b95"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 25 01:19:14.855234 kubelet[3333]: I0325 01:19:14.855036 3333 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c290e755-1595-4bfc-bdcc-5fa900680b95-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "c290e755-1595-4bfc-bdcc-5fa900680b95" (UID: "c290e755-1595-4bfc-bdcc-5fa900680b95"). InnerVolumeSpecName "var-run-calico". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 25 01:19:14.855234 kubelet[3333]: I0325 01:19:14.855055 3333 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c290e755-1595-4bfc-bdcc-5fa900680b95-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "c290e755-1595-4bfc-bdcc-5fa900680b95" (UID: "c290e755-1595-4bfc-bdcc-5fa900680b95"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 25 01:19:14.856142 kubelet[3333]: I0325 01:19:14.852532 3333 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c290e755-1595-4bfc-bdcc-5fa900680b95-tigera-ca-bundle\") pod \"c290e755-1595-4bfc-bdcc-5fa900680b95\" (UID: \"c290e755-1595-4bfc-bdcc-5fa900680b95\") " Mar 25 01:19:14.856142 kubelet[3333]: I0325 01:19:14.855507 3333 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbplc\" (UniqueName: \"kubernetes.io/projected/c290e755-1595-4bfc-bdcc-5fa900680b95-kube-api-access-gbplc\") pod \"c290e755-1595-4bfc-bdcc-5fa900680b95\" (UID: \"c290e755-1595-4bfc-bdcc-5fa900680b95\") " Mar 25 01:19:14.856142 kubelet[3333]: I0325 01:19:14.855524 3333 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c290e755-1595-4bfc-bdcc-5fa900680b95-lib-modules\") pod \"c290e755-1595-4bfc-bdcc-5fa900680b95\" (UID: \"c290e755-1595-4bfc-bdcc-5fa900680b95\") " Mar 25 01:19:14.856142 kubelet[3333]: I0325 01:19:14.855590 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/561a8645-23c9-4062-a095-ef15fb2802a6-var-lib-calico\") pod \"calico-node-lpv54\" (UID: \"561a8645-23c9-4062-a095-ef15fb2802a6\") " pod="calico-system/calico-node-lpv54" Mar 25 01:19:14.856142 kubelet[3333]: I0325 01:19:14.855633 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxjt2\" (UniqueName: \"kubernetes.io/projected/561a8645-23c9-4062-a095-ef15fb2802a6-kube-api-access-sxjt2\") pod \"calico-node-lpv54\" (UID: \"561a8645-23c9-4062-a095-ef15fb2802a6\") " pod="calico-system/calico-node-lpv54" Mar 25 01:19:14.856142 kubelet[3333]: I0325 01:19:14.855653 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/561a8645-23c9-4062-a095-ef15fb2802a6-lib-modules\") pod \"calico-node-lpv54\" (UID: \"561a8645-23c9-4062-a095-ef15fb2802a6\") " pod="calico-system/calico-node-lpv54" Mar 25 01:19:14.856376 kubelet[3333]: I0325 01:19:14.855667 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/561a8645-23c9-4062-a095-ef15fb2802a6-policysync\") pod \"calico-node-lpv54\" (UID: \"561a8645-23c9-4062-a095-ef15fb2802a6\") " pod="calico-system/calico-node-lpv54" Mar 25 01:19:14.856376 kubelet[3333]: I0325 01:19:14.855692 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/561a8645-23c9-4062-a095-ef15fb2802a6-cni-bin-dir\") pod \"calico-node-lpv54\" (UID: \"561a8645-23c9-4062-a095-ef15fb2802a6\") " pod="calico-system/calico-node-lpv54" Mar 25 01:19:14.856376 kubelet[3333]: I0325 
01:19:14.855714 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/561a8645-23c9-4062-a095-ef15fb2802a6-tigera-ca-bundle\") pod \"calico-node-lpv54\" (UID: \"561a8645-23c9-4062-a095-ef15fb2802a6\") " pod="calico-system/calico-node-lpv54" Mar 25 01:19:14.856376 kubelet[3333]: I0325 01:19:14.855734 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/561a8645-23c9-4062-a095-ef15fb2802a6-flexvol-driver-host\") pod \"calico-node-lpv54\" (UID: \"561a8645-23c9-4062-a095-ef15fb2802a6\") " pod="calico-system/calico-node-lpv54" Mar 25 01:19:14.856376 kubelet[3333]: I0325 01:19:14.855749 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/561a8645-23c9-4062-a095-ef15fb2802a6-cni-net-dir\") pod \"calico-node-lpv54\" (UID: \"561a8645-23c9-4062-a095-ef15fb2802a6\") " pod="calico-system/calico-node-lpv54" Mar 25 01:19:14.856484 kubelet[3333]: I0325 01:19:14.855765 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/561a8645-23c9-4062-a095-ef15fb2802a6-xtables-lock\") pod \"calico-node-lpv54\" (UID: \"561a8645-23c9-4062-a095-ef15fb2802a6\") " pod="calico-system/calico-node-lpv54" Mar 25 01:19:14.856484 kubelet[3333]: I0325 01:19:14.855781 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/561a8645-23c9-4062-a095-ef15fb2802a6-node-certs\") pod \"calico-node-lpv54\" (UID: \"561a8645-23c9-4062-a095-ef15fb2802a6\") " pod="calico-system/calico-node-lpv54" Mar 25 01:19:14.856484 kubelet[3333]: I0325 01:19:14.855800 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/561a8645-23c9-4062-a095-ef15fb2802a6-cni-log-dir\") pod \"calico-node-lpv54\" (UID: \"561a8645-23c9-4062-a095-ef15fb2802a6\") " pod="calico-system/calico-node-lpv54" Mar 25 01:19:14.856484 kubelet[3333]: I0325 01:19:14.855819 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/561a8645-23c9-4062-a095-ef15fb2802a6-var-run-calico\") pod \"calico-node-lpv54\" (UID: \"561a8645-23c9-4062-a095-ef15fb2802a6\") " pod="calico-system/calico-node-lpv54" Mar 25 01:19:14.856484 kubelet[3333]: I0325 01:19:14.855856 3333 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c290e755-1595-4bfc-bdcc-5fa900680b95-xtables-lock\") on node \"ci-4284.0.0-a-f1ebfb6c0b\" DevicePath \"\"" Mar 25 01:19:14.856484 kubelet[3333]: I0325 01:19:14.855867 3333 reconciler_common.go:299] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c290e755-1595-4bfc-bdcc-5fa900680b95-var-run-calico\") on node \"ci-4284.0.0-a-f1ebfb6c0b\" DevicePath \"\"" Mar 25 01:19:14.856941 kubelet[3333]: I0325 01:19:14.855876 3333 reconciler_common.go:299] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c290e755-1595-4bfc-bdcc-5fa900680b95-cni-bin-dir\") on node \"ci-4284.0.0-a-f1ebfb6c0b\" DevicePath \"\"" Mar 25 01:19:14.856941 kubelet[3333]: I0325 
01:19:14.855885 3333 reconciler_common.go:299] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c290e755-1595-4bfc-bdcc-5fa900680b95-flexvol-driver-host\") on node \"ci-4284.0.0-a-f1ebfb6c0b\" DevicePath \"\"" Mar 25 01:19:14.856941 kubelet[3333]: I0325 01:19:14.855893 3333 reconciler_common.go:299] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c290e755-1595-4bfc-bdcc-5fa900680b95-policysync\") on node \"ci-4284.0.0-a-f1ebfb6c0b\" DevicePath \"\"" Mar 25 01:19:14.856941 kubelet[3333]: I0325 01:19:14.855901 3333 reconciler_common.go:299] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c290e755-1595-4bfc-bdcc-5fa900680b95-var-lib-calico\") on node \"ci-4284.0.0-a-f1ebfb6c0b\" DevicePath \"\"" Mar 25 01:19:14.856941 kubelet[3333]: I0325 01:19:14.855909 3333 reconciler_common.go:299] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c290e755-1595-4bfc-bdcc-5fa900680b95-cni-log-dir\") on node \"ci-4284.0.0-a-f1ebfb6c0b\" DevicePath \"\"" Mar 25 01:19:14.856941 kubelet[3333]: I0325 01:19:14.855918 3333 reconciler_common.go:299] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c290e755-1595-4bfc-bdcc-5fa900680b95-cni-net-dir\") on node \"ci-4284.0.0-a-f1ebfb6c0b\" DevicePath \"\"" Mar 25 01:19:14.857804 kubelet[3333]: I0325 01:19:14.857723 3333 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c290e755-1595-4bfc-bdcc-5fa900680b95-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c290e755-1595-4bfc-bdcc-5fa900680b95" (UID: "c290e755-1595-4bfc-bdcc-5fa900680b95"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 25 01:19:14.863214 kubelet[3333]: I0325 01:19:14.863149 3333 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c290e755-1595-4bfc-bdcc-5fa900680b95-kube-api-access-gbplc" (OuterVolumeSpecName: "kube-api-access-gbplc") pod "c290e755-1595-4bfc-bdcc-5fa900680b95" (UID: "c290e755-1595-4bfc-bdcc-5fa900680b95"). InnerVolumeSpecName "kube-api-access-gbplc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 25 01:19:14.865251 kubelet[3333]: I0325 01:19:14.864819 3333 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c290e755-1595-4bfc-bdcc-5fa900680b95-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "c290e755-1595-4bfc-bdcc-5fa900680b95" (UID: "c290e755-1595-4bfc-bdcc-5fa900680b95"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 25 01:19:14.869026 kubelet[3333]: I0325 01:19:14.868992 3333 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c290e755-1595-4bfc-bdcc-5fa900680b95-node-certs" (OuterVolumeSpecName: "node-certs") pod "c290e755-1595-4bfc-bdcc-5fa900680b95" (UID: "c290e755-1595-4bfc-bdcc-5fa900680b95"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 25 01:19:14.880148 systemd[1]: var-lib-kubelet-pods-c290e755\x2d1595\x2d4bfc\x2dbdcc\x2d5fa900680b95-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Mar 25 01:19:14.880273 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7b08991c6a14f6e7a6f913c195bf1325a677ddacda527262ece03c581530bcbb-shm.mount: Deactivated successfully. 
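The \x2d and \x7e runs in these mount-unit names are systemd's path escaping: '/' separators become '-', and bytes outside [A-Za-z0-9:_.] become \xNN hex escapes ('-' is \x2d, '~' is \x7e). A minimal Go sketch of that escaping — an illustrative re-implementation, not systemd's own code — which reproduces the kube-api-access unit name seen here:

    package main

    import (
        "fmt"
        "strings"
    )

    // escapePath mimics systemd's path escaping as seen in the mount-unit
    // names above: '/' becomes '-', anything outside [A-Za-z0-9:_.] becomes
    // a \xNN hex escape. Illustrative only, not the systemd source.
    func escapePath(p string) string {
        p = strings.Trim(p, "/")
        var b strings.Builder
        for i := 0; i < len(p); i++ {
            c := p[i]
            switch {
            case c == '/':
                b.WriteByte('-')
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                c >= '0' && c <= '9', c == ':', c == '_', c == '.':
                b.WriteByte(c)
            default:
                fmt.Fprintf(&b, `\x%02x`, c)
            }
        }
        return b.String()
    }

    func main() {
        // Reproduces the kube-api-access mount unit name from the log.
        fmt.Println(escapePath("/var/lib/kubelet/pods/c290e755-1595-4bfc-bdcc-5fa900680b95/volumes/kubernetes.io~projected/kube-api-access-gbplc") + ".mount")
    }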
Mar 25 01:19:14.880325 systemd[1]: var-lib-kubelet-pods-c290e755\x2d1595\x2d4bfc\x2dbdcc\x2d5fa900680b95-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgbplc.mount: Deactivated successfully. Mar 25 01:19:14.880373 systemd[1]: var-lib-kubelet-pods-c290e755\x2d1595\x2d4bfc\x2dbdcc\x2d5fa900680b95-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Mar 25 01:19:14.892352 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a37796ff68772fcfcdec93e006734a6ef7d307e590d4bde350aeb81c26346ab-rootfs.mount: Deactivated successfully. Mar 25 01:19:14.918463 containerd[1757]: time="2025-03-25T01:19:14.918342190Z" level=info msg="StopContainer for \"5a37796ff68772fcfcdec93e006734a6ef7d307e590d4bde350aeb81c26346ab\" returns successfully" Mar 25 01:19:14.919535 containerd[1757]: time="2025-03-25T01:19:14.919438070Z" level=info msg="StopPodSandbox for \"e17e6d437681e299ce52aa70256d0c0993f42e8ab1220bb29b840e1fc7da10fc\"" Mar 25 01:19:14.919535 containerd[1757]: time="2025-03-25T01:19:14.919513670Z" level=info msg="Container to stop \"5a37796ff68772fcfcdec93e006734a6ef7d307e590d4bde350aeb81c26346ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 01:19:14.930989 systemd[1]: cri-containerd-e17e6d437681e299ce52aa70256d0c0993f42e8ab1220bb29b840e1fc7da10fc.scope: Deactivated successfully. Mar 25 01:19:14.935889 containerd[1757]: time="2025-03-25T01:19:14.935696144Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e17e6d437681e299ce52aa70256d0c0993f42e8ab1220bb29b840e1fc7da10fc\" id:\"e17e6d437681e299ce52aa70256d0c0993f42e8ab1220bb29b840e1fc7da10fc\" pid:3744 exit_status:137 exited_at:{seconds:1742865554 nanos:934372825}" Mar 25 01:19:14.959781 kubelet[3333]: I0325 01:19:14.958912 3333 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gbplc\" (UniqueName: \"kubernetes.io/projected/c290e755-1595-4bfc-bdcc-5fa900680b95-kube-api-access-gbplc\") on node \"ci-4284.0.0-a-f1ebfb6c0b\" DevicePath \"\"" Mar 25 01:19:14.959781 kubelet[3333]: I0325 01:19:14.958945 3333 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c290e755-1595-4bfc-bdcc-5fa900680b95-lib-modules\") on node \"ci-4284.0.0-a-f1ebfb6c0b\" DevicePath \"\"" Mar 25 01:19:14.959781 kubelet[3333]: I0325 01:19:14.958956 3333 reconciler_common.go:299] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c290e755-1595-4bfc-bdcc-5fa900680b95-node-certs\") on node \"ci-4284.0.0-a-f1ebfb6c0b\" DevicePath \"\"" Mar 25 01:19:14.959781 kubelet[3333]: I0325 01:19:14.958969 3333 reconciler_common.go:299] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c290e755-1595-4bfc-bdcc-5fa900680b95-tigera-ca-bundle\") on node \"ci-4284.0.0-a-f1ebfb6c0b\" DevicePath \"\"" Mar 25 01:19:14.980761 containerd[1757]: 2025-03-25 01:19:14.818 [INFO][5522] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" Mar 25 01:19:14.980761 containerd[1757]: 2025-03-25 01:19:14.818 [INFO][5522] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" iface="eth0" netns="/var/run/netns/cni-126d4018-31dd-19d8-22b4-21bff4d14c53" Mar 25 01:19:14.980761 containerd[1757]: 2025-03-25 01:19:14.819 [INFO][5522] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" iface="eth0" netns="/var/run/netns/cni-126d4018-31dd-19d8-22b4-21bff4d14c53" Mar 25 01:19:14.980761 containerd[1757]: 2025-03-25 01:19:14.834 [INFO][5522] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" after=16.339434ms iface="eth0" netns="/var/run/netns/cni-126d4018-31dd-19d8-22b4-21bff4d14c53" Mar 25 01:19:14.980761 containerd[1757]: 2025-03-25 01:19:14.834 [INFO][5522] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" Mar 25 01:19:14.980761 containerd[1757]: 2025-03-25 01:19:14.834 [INFO][5522] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" Mar 25 01:19:14.980761 containerd[1757]: 2025-03-25 01:19:14.883 [INFO][5531] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" HandleID="k8s-pod-network.230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" Workload="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--5f856dbd65--p9hgt-eth0" Mar 25 01:19:14.980761 containerd[1757]: 2025-03-25 01:19:14.884 [INFO][5531] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 25 01:19:14.980761 containerd[1757]: 2025-03-25 01:19:14.884 [INFO][5531] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 25 01:19:14.980761 containerd[1757]: 2025-03-25 01:19:14.961 [INFO][5531] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" HandleID="k8s-pod-network.230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" Workload="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--5f856dbd65--p9hgt-eth0" Mar 25 01:19:14.980761 containerd[1757]: 2025-03-25 01:19:14.961 [INFO][5531] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" HandleID="k8s-pod-network.230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" Workload="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--5f856dbd65--p9hgt-eth0" Mar 25 01:19:14.980761 containerd[1757]: 2025-03-25 01:19:14.963 [INFO][5531] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 25 01:19:14.980761 containerd[1757]: 2025-03-25 01:19:14.977 [INFO][5522] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" Mar 25 01:19:14.981949 containerd[1757]: time="2025-03-25T01:19:14.981775928Z" level=info msg="TearDown network for sandbox \"230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2\" successfully" Mar 25 01:19:14.981949 containerd[1757]: time="2025-03-25T01:19:14.981803848Z" level=info msg="StopPodSandbox for \"230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2\" returns successfully" Mar 25 01:19:14.987015 systemd[1]: run-netns-cni\x2d126d4018\x2d31dd\x2d19d8\x2d22b4\x2d21bff4d14c53.mount: Deactivated successfully. 
Mar 25 01:19:15.013526 containerd[1757]: time="2025-03-25T01:19:15.013459396Z" level=info msg="shim disconnected" id=e17e6d437681e299ce52aa70256d0c0993f42e8ab1220bb29b840e1fc7da10fc namespace=k8s.io Mar 25 01:19:15.013526 containerd[1757]: time="2025-03-25T01:19:15.013508756Z" level=warning msg="cleaning up after shim disconnected" id=e17e6d437681e299ce52aa70256d0c0993f42e8ab1220bb29b840e1fc7da10fc namespace=k8s.io Mar 25 01:19:15.014979 containerd[1757]: time="2025-03-25T01:19:15.013540716Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 25 01:19:15.017097 containerd[1757]: time="2025-03-25T01:19:15.016918155Z" level=info msg="received exit event sandbox_id:\"e17e6d437681e299ce52aa70256d0c0993f42e8ab1220bb29b840e1fc7da10fc\" exit_status:137 exited_at:{seconds:1742865554 nanos:934372825}" Mar 25 01:19:15.020495 containerd[1757]: time="2025-03-25T01:19:15.020455274Z" level=info msg="TearDown network for sandbox \"e17e6d437681e299ce52aa70256d0c0993f42e8ab1220bb29b840e1fc7da10fc\" successfully" Mar 25 01:19:15.020657 containerd[1757]: time="2025-03-25T01:19:15.020640074Z" level=info msg="StopPodSandbox for \"e17e6d437681e299ce52aa70256d0c0993f42e8ab1220bb29b840e1fc7da10fc\" returns successfully" Mar 25 01:19:15.099702 containerd[1757]: time="2025-03-25T01:19:15.098826246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lpv54,Uid:561a8645-23c9-4062-a095-ef15fb2802a6,Namespace:calico-system,Attempt:0,}" Mar 25 01:19:15.160582 kubelet[3333]: I0325 01:19:15.160090 3333 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b295f55c-ee34-4b73-b398-00c3b94a537c-tigera-ca-bundle\") pod \"b295f55c-ee34-4b73-b398-00c3b94a537c\" (UID: \"b295f55c-ee34-4b73-b398-00c3b94a537c\") " Mar 25 01:19:15.160582 kubelet[3333]: I0325 01:19:15.160147 3333 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fe0d135-10c8-485b-b95f-2f53eb1fd5f3-tigera-ca-bundle\") pod \"6fe0d135-10c8-485b-b95f-2f53eb1fd5f3\" (UID: \"6fe0d135-10c8-485b-b95f-2f53eb1fd5f3\") " Mar 25 01:19:15.160582 kubelet[3333]: I0325 01:19:15.160175 3333 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b295f55c-ee34-4b73-b398-00c3b94a537c-typha-certs\") pod \"b295f55c-ee34-4b73-b398-00c3b94a537c\" (UID: \"b295f55c-ee34-4b73-b398-00c3b94a537c\") " Mar 25 01:19:15.160582 kubelet[3333]: I0325 01:19:15.160195 3333 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5kgb\" (UniqueName: \"kubernetes.io/projected/6fe0d135-10c8-485b-b95f-2f53eb1fd5f3-kube-api-access-h5kgb\") pod \"6fe0d135-10c8-485b-b95f-2f53eb1fd5f3\" (UID: \"6fe0d135-10c8-485b-b95f-2f53eb1fd5f3\") " Mar 25 01:19:15.160582 kubelet[3333]: I0325 01:19:15.160219 3333 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lml4x\" (UniqueName: \"kubernetes.io/projected/b295f55c-ee34-4b73-b398-00c3b94a537c-kube-api-access-lml4x\") pod \"b295f55c-ee34-4b73-b398-00c3b94a537c\" (UID: \"b295f55c-ee34-4b73-b398-00c3b94a537c\") " Mar 25 01:19:15.162308 containerd[1757]: time="2025-03-25T01:19:15.162263303Z" level=info msg="connecting to shim af9faea14d5e97f42a0aa8d65eeecab0921e310a59827dfbe41b324663a95668" address="unix:///run/containerd/s/b6c7eba8a036f74fb52ab6a76291079f9df7942dfbcfdfae3ca6f1c61765c401" namespace=k8s.io protocol=ttrpc 
version=3 Mar 25 01:19:15.167465 kubelet[3333]: I0325 01:19:15.166809 3333 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fe0d135-10c8-485b-b95f-2f53eb1fd5f3-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "6fe0d135-10c8-485b-b95f-2f53eb1fd5f3" (UID: "6fe0d135-10c8-485b-b95f-2f53eb1fd5f3"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 25 01:19:15.167465 kubelet[3333]: I0325 01:19:15.167707 3333 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b295f55c-ee34-4b73-b398-00c3b94a537c-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "b295f55c-ee34-4b73-b398-00c3b94a537c" (UID: "b295f55c-ee34-4b73-b398-00c3b94a537c"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 25 01:19:15.168183 kubelet[3333]: I0325 01:19:15.168160 3333 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b295f55c-ee34-4b73-b398-00c3b94a537c-kube-api-access-lml4x" (OuterVolumeSpecName: "kube-api-access-lml4x") pod "b295f55c-ee34-4b73-b398-00c3b94a537c" (UID: "b295f55c-ee34-4b73-b398-00c3b94a537c"). InnerVolumeSpecName "kube-api-access-lml4x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 25 01:19:15.169724 kubelet[3333]: I0325 01:19:15.169701 3333 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fe0d135-10c8-485b-b95f-2f53eb1fd5f3-kube-api-access-h5kgb" (OuterVolumeSpecName: "kube-api-access-h5kgb") pod "6fe0d135-10c8-485b-b95f-2f53eb1fd5f3" (UID: "6fe0d135-10c8-485b-b95f-2f53eb1fd5f3"). InnerVolumeSpecName "kube-api-access-h5kgb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 25 01:19:15.172081 kubelet[3333]: I0325 01:19:15.172040 3333 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b295f55c-ee34-4b73-b398-00c3b94a537c-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "b295f55c-ee34-4b73-b398-00c3b94a537c" (UID: "b295f55c-ee34-4b73-b398-00c3b94a537c"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 25 01:19:15.185775 systemd[1]: Started cri-containerd-af9faea14d5e97f42a0aa8d65eeecab0921e310a59827dfbe41b324663a95668.scope - libcontainer container af9faea14d5e97f42a0aa8d65eeecab0921e310a59827dfbe41b324663a95668. 
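These reconciler_common entries are one pass of the kubelet volume manager: mounts belonging to the deleted pods (UIDs c290e755…, b295f55c…, 6fe0d135…) are torn down, while volumes wanted by the replacement calico-node pod (UID 561a8645…) are verified and attached. A toy Go model of that desired-versus-actual diff, with illustrative names rather than kubelet's real reconciler types:

    package main

    import "fmt"

    // reconcile: anything mounted but no longer desired is unmounted;
    // anything desired but not yet present is verified/attached. Toy model.
    func reconcile(desired, actual map[string]bool) (unmount, verify []string) {
        for v := range actual {
            if !desired[v] {
                unmount = append(unmount, v)
            }
        }
        for v := range desired {
            if !actual[v] {
                verify = append(verify, v)
            }
        }
        return
    }

    func main() {
        actual := map[string]bool{"c290e755/lib-modules": true}
        desired := map[string]bool{"561a8645/lib-modules": true}
        u, v := reconcile(desired, actual)
        fmt.Println("UnmountVolume:", u)
        fmt.Println("VerifyControllerAttachedVolume:", v)
    }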
Mar 25 01:19:15.233574 containerd[1757]: time="2025-03-25T01:19:15.233531838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lpv54,Uid:561a8645-23c9-4062-a095-ef15fb2802a6,Namespace:calico-system,Attempt:0,} returns sandbox id \"af9faea14d5e97f42a0aa8d65eeecab0921e310a59827dfbe41b324663a95668\"" Mar 25 01:19:15.236514 containerd[1757]: time="2025-03-25T01:19:15.236421077Z" level=info msg="CreateContainer within sandbox \"af9faea14d5e97f42a0aa8d65eeecab0921e310a59827dfbe41b324663a95668\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 25 01:19:15.241392 kubelet[3333]: I0325 01:19:15.240879 3333 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 25 01:19:15.260720 kubelet[3333]: I0325 01:19:15.260691 3333 reconciler_common.go:299] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fe0d135-10c8-485b-b95f-2f53eb1fd5f3-tigera-ca-bundle\") on node \"ci-4284.0.0-a-f1ebfb6c0b\" DevicePath \"\"" Mar 25 01:19:15.260932 kubelet[3333]: I0325 01:19:15.260881 3333 reconciler_common.go:299] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b295f55c-ee34-4b73-b398-00c3b94a537c-tigera-ca-bundle\") on node \"ci-4284.0.0-a-f1ebfb6c0b\" DevicePath \"\"" Mar 25 01:19:15.260932 kubelet[3333]: I0325 01:19:15.260897 3333 reconciler_common.go:299] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b295f55c-ee34-4b73-b398-00c3b94a537c-typha-certs\") on node \"ci-4284.0.0-a-f1ebfb6c0b\" DevicePath \"\"" Mar 25 01:19:15.260932 kubelet[3333]: I0325 01:19:15.260908 3333 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h5kgb\" (UniqueName: \"kubernetes.io/projected/6fe0d135-10c8-485b-b95f-2f53eb1fd5f3-kube-api-access-h5kgb\") on node \"ci-4284.0.0-a-f1ebfb6c0b\" DevicePath \"\"" Mar 25 01:19:15.260932 kubelet[3333]: I0325 01:19:15.260917 3333 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lml4x\" (UniqueName: \"kubernetes.io/projected/b295f55c-ee34-4b73-b398-00c3b94a537c-kube-api-access-lml4x\") on node \"ci-4284.0.0-a-f1ebfb6c0b\" DevicePath \"\"" Mar 25 01:19:15.268666 containerd[1757]: time="2025-03-25T01:19:15.268624025Z" level=info msg="Container 08ec736484fef339a00131b46ccc516750881d1d9b5db6b11d635acce0d34a88: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:19:15.315447 containerd[1757]: time="2025-03-25T01:19:15.315379448Z" level=info msg="CreateContainer within sandbox \"af9faea14d5e97f42a0aa8d65eeecab0921e310a59827dfbe41b324663a95668\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"08ec736484fef339a00131b46ccc516750881d1d9b5db6b11d635acce0d34a88\"" Mar 25 01:19:15.316747 containerd[1757]: time="2025-03-25T01:19:15.316226608Z" level=info msg="StartContainer for \"08ec736484fef339a00131b46ccc516750881d1d9b5db6b11d635acce0d34a88\"" Mar 25 01:19:15.318237 containerd[1757]: time="2025-03-25T01:19:15.318018607Z" level=info msg="connecting to shim 08ec736484fef339a00131b46ccc516750881d1d9b5db6b11d635acce0d34a88" address="unix:///run/containerd/s/b6c7eba8a036f74fb52ab6a76291079f9df7942dfbcfdfae3ca6f1c61765c401" protocol=ttrpc version=3 Mar 25 01:19:15.347700 systemd[1]: Started cri-containerd-08ec736484fef339a00131b46ccc516750881d1d9b5db6b11d635acce0d34a88.scope - libcontainer container 08ec736484fef339a00131b46ccc516750881d1d9b5db6b11d635acce0d34a88. 
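The sandbox and container lines above follow the usual CRI order: RunPodSandbox yields a sandbox ID, CreateContainer is issued within that sandbox, and StartContainer runs it. A condensed Go sketch of that sequence; the interface below paraphrases CRI's RuntimeService rather than the generated gRPC client, and the fake IDs are abbreviations of the ones in the log:

    package main

    import "fmt"

    // runtimeService paraphrases the CRI calls visible in the log.
    type runtimeService interface {
        RunPodSandbox(name, namespace, uid string) (sandboxID string, err error)
        CreateContainer(sandboxID, name string) (containerID string, err error)
        StartContainer(containerID string) error
    }

    func startPod(r runtimeService) error {
        sb, err := r.RunPodSandbox("calico-node-lpv54", "calico-system",
            "561a8645-23c9-4062-a095-ef15fb2802a6")
        if err != nil {
            return err
        }
        // Init containers such as flexvol-driver run first, one at a time.
        c, err := r.CreateContainer(sb, "flexvol-driver")
        if err != nil {
            return err
        }
        return r.StartContainer(c)
    }

    type fake struct{}

    func (fake) RunPodSandbox(_, _, _ string) (string, error) { return "af9faea14d5e", nil }
    func (fake) CreateContainer(_, _ string) (string, error)  { return "08ec736484fe", nil }
    func (fake) StartContainer(string) error                  { return nil }

    func main() { fmt.Println(startPod(fake{})) }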
Mar 25 01:19:15.381284 kubelet[3333]: I0325 01:19:15.381021 3333 scope.go:117] "RemoveContainer" containerID="75d5eaca95570629b3e8a46f8a8e245db501832c00567a5b7702b2dc1dcaab6a" Mar 25 01:19:15.388073 containerd[1757]: time="2025-03-25T01:19:15.388040542Z" level=info msg="RemoveContainer for \"75d5eaca95570629b3e8a46f8a8e245db501832c00567a5b7702b2dc1dcaab6a\"" Mar 25 01:19:15.398044 systemd[1]: Removed slice kubepods-besteffort-podc290e755_1595_4bfc_bdcc_5fa900680b95.slice - libcontainer container kubepods-besteffort-podc290e755_1595_4bfc_bdcc_5fa900680b95.slice. Mar 25 01:19:15.398279 systemd[1]: kubepods-besteffort-podc290e755_1595_4bfc_bdcc_5fa900680b95.slice: Consumed 1.903s CPU time, 286.5M memory peak, 4K read from disk, 157.1M written to disk. Mar 25 01:19:15.411786 containerd[1757]: time="2025-03-25T01:19:15.410483814Z" level=info msg="RemoveContainer for \"75d5eaca95570629b3e8a46f8a8e245db501832c00567a5b7702b2dc1dcaab6a\" returns successfully" Mar 25 01:19:15.411786 containerd[1757]: time="2025-03-25T01:19:15.411379654Z" level=info msg="StartContainer for \"08ec736484fef339a00131b46ccc516750881d1d9b5db6b11d635acce0d34a88\" returns successfully" Mar 25 01:19:15.414455 kubelet[3333]: I0325 01:19:15.414423 3333 scope.go:117] "RemoveContainer" containerID="170b91d03fb43817e727cf27bc54e927f3bfd56b9ddc30afcd9bccc6067d4614" Mar 25 01:19:15.420761 systemd[1]: Removed slice kubepods-besteffort-podb295f55c_ee34_4b73_b398_00c3b94a537c.slice - libcontainer container kubepods-besteffort-podb295f55c_ee34_4b73_b398_00c3b94a537c.slice. Mar 25 01:19:15.425586 systemd[1]: Removed slice kubepods-besteffort-pod6fe0d135_10c8_485b_b95f_2f53eb1fd5f3.slice - libcontainer container kubepods-besteffort-pod6fe0d135_10c8_485b_b95f_2f53eb1fd5f3.slice. Mar 25 01:19:15.432335 containerd[1757]: time="2025-03-25T01:19:15.429590207Z" level=info msg="RemoveContainer for \"170b91d03fb43817e727cf27bc54e927f3bfd56b9ddc30afcd9bccc6067d4614\"" Mar 25 01:19:15.445515 containerd[1757]: time="2025-03-25T01:19:15.445402802Z" level=info msg="RemoveContainer for \"170b91d03fb43817e727cf27bc54e927f3bfd56b9ddc30afcd9bccc6067d4614\" returns successfully" Mar 25 01:19:15.447689 kubelet[3333]: I0325 01:19:15.447275 3333 scope.go:117] "RemoveContainer" containerID="45c8a44c5af1947f4707575baae1dab300f0cc8231301c7a44bd501202dd6fe3" Mar 25 01:19:15.457354 containerd[1757]: time="2025-03-25T01:19:15.457303677Z" level=info msg="RemoveContainer for \"45c8a44c5af1947f4707575baae1dab300f0cc8231301c7a44bd501202dd6fe3\"" Mar 25 01:19:15.467356 containerd[1757]: time="2025-03-25T01:19:15.467313594Z" level=info msg="RemoveContainer for \"45c8a44c5af1947f4707575baae1dab300f0cc8231301c7a44bd501202dd6fe3\" returns successfully" Mar 25 01:19:15.468016 kubelet[3333]: I0325 01:19:15.467894 3333 scope.go:117] "RemoveContainer" containerID="75d5eaca95570629b3e8a46f8a8e245db501832c00567a5b7702b2dc1dcaab6a" Mar 25 01:19:15.469087 containerd[1757]: time="2025-03-25T01:19:15.468997273Z" level=error msg="ContainerStatus for \"75d5eaca95570629b3e8a46f8a8e245db501832c00567a5b7702b2dc1dcaab6a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"75d5eaca95570629b3e8a46f8a8e245db501832c00567a5b7702b2dc1dcaab6a\": not found" Mar 25 01:19:15.469246 kubelet[3333]: E0325 01:19:15.469209 3333 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"75d5eaca95570629b3e8a46f8a8e245db501832c00567a5b7702b2dc1dcaab6a\": not found" 
containerID="75d5eaca95570629b3e8a46f8a8e245db501832c00567a5b7702b2dc1dcaab6a" Mar 25 01:19:15.469439 kubelet[3333]: I0325 01:19:15.469250 3333 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"75d5eaca95570629b3e8a46f8a8e245db501832c00567a5b7702b2dc1dcaab6a"} err="failed to get container status \"75d5eaca95570629b3e8a46f8a8e245db501832c00567a5b7702b2dc1dcaab6a\": rpc error: code = NotFound desc = an error occurred when try to find container \"75d5eaca95570629b3e8a46f8a8e245db501832c00567a5b7702b2dc1dcaab6a\": not found" Mar 25 01:19:15.469489 kubelet[3333]: I0325 01:19:15.469450 3333 scope.go:117] "RemoveContainer" containerID="170b91d03fb43817e727cf27bc54e927f3bfd56b9ddc30afcd9bccc6067d4614" Mar 25 01:19:15.470490 containerd[1757]: time="2025-03-25T01:19:15.470442513Z" level=error msg="ContainerStatus for \"170b91d03fb43817e727cf27bc54e927f3bfd56b9ddc30afcd9bccc6067d4614\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"170b91d03fb43817e727cf27bc54e927f3bfd56b9ddc30afcd9bccc6067d4614\": not found" Mar 25 01:19:15.471129 kubelet[3333]: E0325 01:19:15.470821 3333 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"170b91d03fb43817e727cf27bc54e927f3bfd56b9ddc30afcd9bccc6067d4614\": not found" containerID="170b91d03fb43817e727cf27bc54e927f3bfd56b9ddc30afcd9bccc6067d4614" Mar 25 01:19:15.471129 kubelet[3333]: I0325 01:19:15.470853 3333 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"170b91d03fb43817e727cf27bc54e927f3bfd56b9ddc30afcd9bccc6067d4614"} err="failed to get container status \"170b91d03fb43817e727cf27bc54e927f3bfd56b9ddc30afcd9bccc6067d4614\": rpc error: code = NotFound desc = an error occurred when try to find container \"170b91d03fb43817e727cf27bc54e927f3bfd56b9ddc30afcd9bccc6067d4614\": not found" Mar 25 01:19:15.471129 kubelet[3333]: I0325 01:19:15.470872 3333 scope.go:117] "RemoveContainer" containerID="45c8a44c5af1947f4707575baae1dab300f0cc8231301c7a44bd501202dd6fe3" Mar 25 01:19:15.471276 containerd[1757]: time="2025-03-25T01:19:15.471074352Z" level=error msg="ContainerStatus for \"45c8a44c5af1947f4707575baae1dab300f0cc8231301c7a44bd501202dd6fe3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"45c8a44c5af1947f4707575baae1dab300f0cc8231301c7a44bd501202dd6fe3\": not found" Mar 25 01:19:15.472825 kubelet[3333]: E0325 01:19:15.471415 3333 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"45c8a44c5af1947f4707575baae1dab300f0cc8231301c7a44bd501202dd6fe3\": not found" containerID="45c8a44c5af1947f4707575baae1dab300f0cc8231301c7a44bd501202dd6fe3" Mar 25 01:19:15.472825 kubelet[3333]: I0325 01:19:15.471650 3333 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"45c8a44c5af1947f4707575baae1dab300f0cc8231301c7a44bd501202dd6fe3"} err="failed to get container status \"45c8a44c5af1947f4707575baae1dab300f0cc8231301c7a44bd501202dd6fe3\": rpc error: code = NotFound desc = an error occurred when try to find container \"45c8a44c5af1947f4707575baae1dab300f0cc8231301c7a44bd501202dd6fe3\": not found" Mar 25 01:19:15.472825 kubelet[3333]: I0325 01:19:15.471668 3333 scope.go:117] "RemoveContainer" 
containerID="5a37796ff68772fcfcdec93e006734a6ef7d307e590d4bde350aeb81c26346ab" Mar 25 01:19:15.476620 containerd[1757]: time="2025-03-25T01:19:15.476323591Z" level=info msg="RemoveContainer for \"5a37796ff68772fcfcdec93e006734a6ef7d307e590d4bde350aeb81c26346ab\"" Mar 25 01:19:15.483622 systemd[1]: cri-containerd-08ec736484fef339a00131b46ccc516750881d1d9b5db6b11d635acce0d34a88.scope: Deactivated successfully. Mar 25 01:19:15.484119 systemd[1]: cri-containerd-08ec736484fef339a00131b46ccc516750881d1d9b5db6b11d635acce0d34a88.scope: Consumed 33ms CPU time, 7.6M memory peak, 6.2M written to disk. Mar 25 01:19:15.490296 containerd[1757]: time="2025-03-25T01:19:15.490155666Z" level=info msg="received exit event container_id:\"08ec736484fef339a00131b46ccc516750881d1d9b5db6b11d635acce0d34a88\" id:\"08ec736484fef339a00131b46ccc516750881d1d9b5db6b11d635acce0d34a88\" pid:5658 exited_at:{seconds:1742865555 nanos:487285787}" Mar 25 01:19:15.492317 containerd[1757]: time="2025-03-25T01:19:15.492289705Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08ec736484fef339a00131b46ccc516750881d1d9b5db6b11d635acce0d34a88\" id:\"08ec736484fef339a00131b46ccc516750881d1d9b5db6b11d635acce0d34a88\" pid:5658 exited_at:{seconds:1742865555 nanos:487285787}" Mar 25 01:19:15.494635 containerd[1757]: time="2025-03-25T01:19:15.493773464Z" level=info msg="RemoveContainer for \"5a37796ff68772fcfcdec93e006734a6ef7d307e590d4bde350aeb81c26346ab\" returns successfully" Mar 25 01:19:15.495139 kubelet[3333]: I0325 01:19:15.495029 3333 scope.go:117] "RemoveContainer" containerID="5a37796ff68772fcfcdec93e006734a6ef7d307e590d4bde350aeb81c26346ab" Mar 25 01:19:15.496164 containerd[1757]: time="2025-03-25T01:19:15.496135624Z" level=error msg="ContainerStatus for \"5a37796ff68772fcfcdec93e006734a6ef7d307e590d4bde350aeb81c26346ab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5a37796ff68772fcfcdec93e006734a6ef7d307e590d4bde350aeb81c26346ab\": not found" Mar 25 01:19:15.496929 kubelet[3333]: E0325 01:19:15.496820 3333 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5a37796ff68772fcfcdec93e006734a6ef7d307e590d4bde350aeb81c26346ab\": not found" containerID="5a37796ff68772fcfcdec93e006734a6ef7d307e590d4bde350aeb81c26346ab" Mar 25 01:19:15.497304 kubelet[3333]: I0325 01:19:15.497274 3333 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5a37796ff68772fcfcdec93e006734a6ef7d307e590d4bde350aeb81c26346ab"} err="failed to get container status \"5a37796ff68772fcfcdec93e006734a6ef7d307e590d4bde350aeb81c26346ab\": rpc error: code = NotFound desc = an error occurred when try to find container \"5a37796ff68772fcfcdec93e006734a6ef7d307e590d4bde350aeb81c26346ab\": not found" Mar 25 01:19:15.497799 kubelet[3333]: I0325 01:19:15.497732 3333 scope.go:117] "RemoveContainer" containerID="4ea000afee09dfbf94474e21fe13dc2ea58aa6daffa5f442844936a2c19d2438" Mar 25 01:19:15.511368 containerd[1757]: time="2025-03-25T01:19:15.511234538Z" level=info msg="RemoveContainer for \"4ea000afee09dfbf94474e21fe13dc2ea58aa6daffa5f442844936a2c19d2438\"" Mar 25 01:19:15.526696 containerd[1757]: time="2025-03-25T01:19:15.526564213Z" level=info msg="RemoveContainer for \"4ea000afee09dfbf94474e21fe13dc2ea58aa6daffa5f442844936a2c19d2438\" returns successfully" Mar 25 01:19:15.528663 kubelet[3333]: I0325 01:19:15.527357 3333 scope.go:117] "RemoveContainer" 
containerID="4ea000afee09dfbf94474e21fe13dc2ea58aa6daffa5f442844936a2c19d2438" Mar 25 01:19:15.528797 containerd[1757]: time="2025-03-25T01:19:15.527686972Z" level=error msg="ContainerStatus for \"4ea000afee09dfbf94474e21fe13dc2ea58aa6daffa5f442844936a2c19d2438\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4ea000afee09dfbf94474e21fe13dc2ea58aa6daffa5f442844936a2c19d2438\": not found" Mar 25 01:19:15.529096 kubelet[3333]: E0325 01:19:15.529030 3333 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4ea000afee09dfbf94474e21fe13dc2ea58aa6daffa5f442844936a2c19d2438\": not found" containerID="4ea000afee09dfbf94474e21fe13dc2ea58aa6daffa5f442844936a2c19d2438" Mar 25 01:19:15.529096 kubelet[3333]: I0325 01:19:15.529061 3333 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4ea000afee09dfbf94474e21fe13dc2ea58aa6daffa5f442844936a2c19d2438"} err="failed to get container status \"4ea000afee09dfbf94474e21fe13dc2ea58aa6daffa5f442844936a2c19d2438\": rpc error: code = NotFound desc = an error occurred when try to find container \"4ea000afee09dfbf94474e21fe13dc2ea58aa6daffa5f442844936a2c19d2438\": not found" Mar 25 01:19:15.879416 systemd[1]: var-lib-kubelet-pods-6fe0d135\x2d10c8\x2d485b\x2db95f\x2d2f53eb1fd5f3-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. Mar 25 01:19:15.879891 systemd[1]: var-lib-kubelet-pods-6fe0d135\x2d10c8\x2d485b\x2db95f\x2d2f53eb1fd5f3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh5kgb.mount: Deactivated successfully. Mar 25 01:19:15.879950 systemd[1]: var-lib-kubelet-pods-b295f55c\x2dee34\x2d4b73\x2db398\x2d00c3b94a537c-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Mar 25 01:19:15.880002 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e17e6d437681e299ce52aa70256d0c0993f42e8ab1220bb29b840e1fc7da10fc-rootfs.mount: Deactivated successfully. Mar 25 01:19:15.880050 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e17e6d437681e299ce52aa70256d0c0993f42e8ab1220bb29b840e1fc7da10fc-shm.mount: Deactivated successfully. Mar 25 01:19:15.880100 systemd[1]: var-lib-kubelet-pods-b295f55c\x2dee34\x2d4b73\x2db398\x2d00c3b94a537c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlml4x.mount: Deactivated successfully. Mar 25 01:19:15.880149 systemd[1]: var-lib-kubelet-pods-b295f55c\x2dee34\x2d4b73\x2db398\x2d00c3b94a537c-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. 
Mar 25 01:19:16.094500 kubelet[3333]: I0325 01:19:16.094420 3333 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fe0d135-10c8-485b-b95f-2f53eb1fd5f3" path="/var/lib/kubelet/pods/6fe0d135-10c8-485b-b95f-2f53eb1fd5f3/volumes" Mar 25 01:19:16.095633 kubelet[3333]: I0325 01:19:16.095364 3333 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b295f55c-ee34-4b73-b398-00c3b94a537c" path="/var/lib/kubelet/pods/b295f55c-ee34-4b73-b398-00c3b94a537c/volumes" Mar 25 01:19:16.095829 kubelet[3333]: I0325 01:19:16.095808 3333 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c290e755-1595-4bfc-bdcc-5fa900680b95" path="/var/lib/kubelet/pods/c290e755-1595-4bfc-bdcc-5fa900680b95/volumes" Mar 25 01:19:16.421113 containerd[1757]: time="2025-03-25T01:19:16.420910492Z" level=info msg="CreateContainer within sandbox \"af9faea14d5e97f42a0aa8d65eeecab0921e310a59827dfbe41b324663a95668\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 25 01:19:16.477717 containerd[1757]: time="2025-03-25T01:19:16.475801913Z" level=info msg="Container e988adc949a95412ed11fcb54d17a559336ac546fdca920775b303bd1ccfdb11: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:19:16.500229 containerd[1757]: time="2025-03-25T01:19:16.500186544Z" level=info msg="CreateContainer within sandbox \"af9faea14d5e97f42a0aa8d65eeecab0921e310a59827dfbe41b324663a95668\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e988adc949a95412ed11fcb54d17a559336ac546fdca920775b303bd1ccfdb11\"" Mar 25 01:19:16.501244 containerd[1757]: time="2025-03-25T01:19:16.501204824Z" level=info msg="StartContainer for \"e988adc949a95412ed11fcb54d17a559336ac546fdca920775b303bd1ccfdb11\"" Mar 25 01:19:16.502922 containerd[1757]: time="2025-03-25T01:19:16.502879983Z" level=info msg="connecting to shim e988adc949a95412ed11fcb54d17a559336ac546fdca920775b303bd1ccfdb11" address="unix:///run/containerd/s/b6c7eba8a036f74fb52ab6a76291079f9df7942dfbcfdfae3ca6f1c61765c401" protocol=ttrpc version=3 Mar 25 01:19:16.528810 systemd[1]: Started cri-containerd-e988adc949a95412ed11fcb54d17a559336ac546fdca920775b303bd1ccfdb11.scope - libcontainer container e988adc949a95412ed11fcb54d17a559336ac546fdca920775b303bd1ccfdb11. Mar 25 01:19:16.571701 containerd[1757]: time="2025-03-25T01:19:16.571651958Z" level=info msg="StartContainer for \"e988adc949a95412ed11fcb54d17a559336ac546fdca920775b303bd1ccfdb11\" returns successfully" Mar 25 01:19:16.815860 kubelet[3333]: I0325 01:19:16.815714 3333 memory_manager.go:355] "RemoveStaleState removing state" podUID="b295f55c-ee34-4b73-b398-00c3b94a537c" containerName="calico-typha" Mar 25 01:19:16.816708 kubelet[3333]: I0325 01:19:16.816686 3333 memory_manager.go:355] "RemoveStaleState removing state" podUID="6fe0d135-10c8-485b-b95f-2f53eb1fd5f3" containerName="calico-kube-controllers" Mar 25 01:19:16.837413 systemd[1]: Created slice kubepods-besteffort-pod1b615df6_c548_4d51_a47d_f0c70a2c7e42.slice - libcontainer container kubepods-besteffort-pod1b615df6_c548_4d51_a47d_f0c70a2c7e42.slice. 
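"Cleaned up orphaned pod volumes dir" fires once a deleted pod's volumes directory is empty and can be removed; the path layout is visible in the log lines themselves. A small Go sketch building it, assuming the default kubelet --root-dir of /var/lib/kubelet:

    package main

    import (
        "fmt"
        "path/filepath"
    )

    // podVolumesDir is the directory kubelet_volumes checks before logging
    // "Cleaned up orphaned pod volumes dir".
    func podVolumesDir(kubeletRoot, podUID string) string {
        return filepath.Join(kubeletRoot, "pods", podUID, "volumes")
    }

    func main() {
        fmt.Println(podVolumesDir("/var/lib/kubelet", "6fe0d135-10c8-485b-b95f-2f53eb1fd5f3"))
        // /var/lib/kubelet/pods/6fe0d135-10c8-485b-b95f-2f53eb1fd5f3/volumes
    }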
Mar 25 01:19:16.972557 kubelet[3333]: I0325 01:19:16.972509 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b615df6-c548-4d51-a47d-f0c70a2c7e42-tigera-ca-bundle\") pod \"calico-typha-7968c68487-4lhhh\" (UID: \"1b615df6-c548-4d51-a47d-f0c70a2c7e42\") " pod="calico-system/calico-typha-7968c68487-4lhhh" Mar 25 01:19:16.972557 kubelet[3333]: I0325 01:19:16.972553 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1b615df6-c548-4d51-a47d-f0c70a2c7e42-typha-certs\") pod \"calico-typha-7968c68487-4lhhh\" (UID: \"1b615df6-c548-4d51-a47d-f0c70a2c7e42\") " pod="calico-system/calico-typha-7968c68487-4lhhh" Mar 25 01:19:16.972754 kubelet[3333]: I0325 01:19:16.972577 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8znxp\" (UniqueName: \"kubernetes.io/projected/1b615df6-c548-4d51-a47d-f0c70a2c7e42-kube-api-access-8znxp\") pod \"calico-typha-7968c68487-4lhhh\" (UID: \"1b615df6-c548-4d51-a47d-f0c70a2c7e42\") " pod="calico-system/calico-typha-7968c68487-4lhhh" Mar 25 01:19:17.143587 containerd[1757]: time="2025-03-25T01:19:17.143508074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7968c68487-4lhhh,Uid:1b615df6-c548-4d51-a47d-f0c70a2c7e42,Namespace:calico-system,Attempt:0,}" Mar 25 01:19:17.193142 systemd[1]: cri-containerd-e988adc949a95412ed11fcb54d17a559336ac546fdca920775b303bd1ccfdb11.scope: Deactivated successfully. Mar 25 01:19:17.195245 systemd[1]: cri-containerd-e988adc949a95412ed11fcb54d17a559336ac546fdca920775b303bd1ccfdb11.scope: Consumed 535ms CPU time, 66.7M memory peak, 36.2M read from disk. Mar 25 01:19:17.202641 containerd[1757]: time="2025-03-25T01:19:17.202466892Z" level=info msg="received exit event container_id:\"e988adc949a95412ed11fcb54d17a559336ac546fdca920775b303bd1ccfdb11\" id:\"e988adc949a95412ed11fcb54d17a559336ac546fdca920775b303bd1ccfdb11\" pid:5708 exited_at:{seconds:1742865557 nanos:198748654}" Mar 25 01:19:17.205772 containerd[1757]: time="2025-03-25T01:19:17.205713611Z" level=info msg="connecting to shim 4bdcfc9e2e95ef4897699b43339701d800a6b43963616a74816a3edcb0757380" address="unix:///run/containerd/s/2e693c4eb9725b1d116c0a8cc19e536639767b951b61502aa3c40230891cffde" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:19:17.228122 containerd[1757]: time="2025-03-25T01:19:17.228081363Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e988adc949a95412ed11fcb54d17a559336ac546fdca920775b303bd1ccfdb11\" id:\"e988adc949a95412ed11fcb54d17a559336ac546fdca920775b303bd1ccfdb11\" pid:5708 exited_at:{seconds:1742865557 nanos:198748654}" Mar 25 01:19:17.248902 systemd[1]: Started cri-containerd-4bdcfc9e2e95ef4897699b43339701d800a6b43963616a74816a3edcb0757380.scope - libcontainer container 4bdcfc9e2e95ef4897699b43339701d800a6b43963616a74816a3edcb0757380. Mar 25 01:19:17.261763 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e988adc949a95412ed11fcb54d17a559336ac546fdca920775b303bd1ccfdb11-rootfs.mount: Deactivated successfully. 
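The "Consumed … CPU time, … memory peak" lines come from systemd's cgroup accounting when a scope is deactivated; on a cgroup v2 kernel like this 6.6 one, the peak figure corresponds to the memory.peak counter. A hedged Go sketch reading it; the exact path is an assumption, since real cri-containerd scopes sit under the kubepods.slice hierarchy named in the log:

    package main

    import (
        "fmt"
        "os"
        "strconv"
        "strings"
    )

    // peakBytes reads the cgroup v2 memory.peak counter for a scope or slice.
    func peakBytes(cgroupDir string) (int64, error) {
        raw, err := os.ReadFile(cgroupDir + "/memory.peak")
        if err != nil {
            return 0, err
        }
        return strconv.ParseInt(strings.TrimSpace(string(raw)), 10, 64)
    }

    func main() {
        // Illustrative path; adjust to the scope's real location.
        n, err := peakBytes("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice")
        fmt.Println(n, err)
    }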
Mar 25 01:19:17.341468 containerd[1757]: time="2025-03-25T01:19:17.340360923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7968c68487-4lhhh,Uid:1b615df6-c548-4d51-a47d-f0c70a2c7e42,Namespace:calico-system,Attempt:0,} returns sandbox id \"4bdcfc9e2e95ef4897699b43339701d800a6b43963616a74816a3edcb0757380\"" Mar 25 01:19:17.353738 containerd[1757]: time="2025-03-25T01:19:17.353129199Z" level=info msg="CreateContainer within sandbox \"4bdcfc9e2e95ef4897699b43339701d800a6b43963616a74816a3edcb0757380\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 25 01:19:17.378969 containerd[1757]: time="2025-03-25T01:19:17.378806069Z" level=info msg="Container ae4298c2d41c4a100c3f28fd4a7bc62a5fbb5ef6e3d6a20513e52ae9c22e087a: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:19:17.395753 containerd[1757]: time="2025-03-25T01:19:17.395611023Z" level=info msg="CreateContainer within sandbox \"4bdcfc9e2e95ef4897699b43339701d800a6b43963616a74816a3edcb0757380\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ae4298c2d41c4a100c3f28fd4a7bc62a5fbb5ef6e3d6a20513e52ae9c22e087a\"" Mar 25 01:19:17.396883 containerd[1757]: time="2025-03-25T01:19:17.396382183Z" level=info msg="StartContainer for \"ae4298c2d41c4a100c3f28fd4a7bc62a5fbb5ef6e3d6a20513e52ae9c22e087a\"" Mar 25 01:19:17.397867 containerd[1757]: time="2025-03-25T01:19:17.397742143Z" level=info msg="connecting to shim ae4298c2d41c4a100c3f28fd4a7bc62a5fbb5ef6e3d6a20513e52ae9c22e087a" address="unix:///run/containerd/s/2e693c4eb9725b1d116c0a8cc19e536639767b951b61502aa3c40230891cffde" protocol=ttrpc version=3 Mar 25 01:19:17.424945 systemd[1]: Started cri-containerd-ae4298c2d41c4a100c3f28fd4a7bc62a5fbb5ef6e3d6a20513e52ae9c22e087a.scope - libcontainer container ae4298c2d41c4a100c3f28fd4a7bc62a5fbb5ef6e3d6a20513e52ae9c22e087a. Mar 25 01:19:17.446416 containerd[1757]: time="2025-03-25T01:19:17.444875966Z" level=info msg="CreateContainer within sandbox \"af9faea14d5e97f42a0aa8d65eeecab0921e310a59827dfbe41b324663a95668\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 25 01:19:17.473170 containerd[1757]: time="2025-03-25T01:19:17.473113156Z" level=info msg="Container 23af9a8a118b49e4668aa68d3dfcb5d9f624c9dfc38075b333877d5196869496: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:19:17.492648 containerd[1757]: time="2025-03-25T01:19:17.490592629Z" level=info msg="CreateContainer within sandbox \"af9faea14d5e97f42a0aa8d65eeecab0921e310a59827dfbe41b324663a95668\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"23af9a8a118b49e4668aa68d3dfcb5d9f624c9dfc38075b333877d5196869496\"" Mar 25 01:19:17.492648 containerd[1757]: time="2025-03-25T01:19:17.491418149Z" level=info msg="StartContainer for \"23af9a8a118b49e4668aa68d3dfcb5d9f624c9dfc38075b333877d5196869496\"" Mar 25 01:19:17.494650 containerd[1757]: time="2025-03-25T01:19:17.494558148Z" level=info msg="connecting to shim 23af9a8a118b49e4668aa68d3dfcb5d9f624c9dfc38075b333877d5196869496" address="unix:///run/containerd/s/b6c7eba8a036f74fb52ab6a76291079f9df7942dfbcfdfae3ca6f1c61765c401" protocol=ttrpc version=3 Mar 25 01:19:17.517810 systemd[1]: Started cri-containerd-23af9a8a118b49e4668aa68d3dfcb5d9f624c9dfc38075b333877d5196869496.scope - libcontainer container 23af9a8a118b49e4668aa68d3dfcb5d9f624c9dfc38075b333877d5196869496. 
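Note that flexvol-driver, install-cni, and now calico-node all connect to the same /run/containerd/s/b6c7eba8… address with protocol=ttrpc: one shim serves the whole sandbox. A minimal Go sketch of such a dial using the containerd ttrpc client; it only opens the connection, no RPCs are issued:

    package main

    import (
        "fmt"
        "net"

        "github.com/containerd/ttrpc"
    )

    // dialShim opens the sandbox's shim socket, as in the
    // "connecting to shim ... protocol=ttrpc" lines above.
    func dialShim(addr string) (*ttrpc.Client, error) {
        conn, err := net.Dial("unix", addr)
        if err != nil {
            return nil, err
        }
        return ttrpc.NewClient(conn), nil
    }

    func main() {
        c, err := dialShim("/run/containerd/s/b6c7eba8a036f74fb52ab6a76291079f9df7942dfbcfdfae3ca6f1c61765c401")
        fmt.Println(c != nil, err)
    }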
Mar 25 01:19:17.539983 containerd[1757]: time="2025-03-25T01:19:17.539516492Z" level=info msg="StartContainer for \"ae4298c2d41c4a100c3f28fd4a7bc62a5fbb5ef6e3d6a20513e52ae9c22e087a\" returns successfully" Mar 25 01:19:17.583625 containerd[1757]: time="2025-03-25T01:19:17.583554156Z" level=info msg="StartContainer for \"23af9a8a118b49e4668aa68d3dfcb5d9f624c9dfc38075b333877d5196869496\" returns successfully" Mar 25 01:19:17.781848 systemd[1]: Created slice kubepods-besteffort-pod7102fdb7_6fdd_433a_8adb_af740be48f42.slice - libcontainer container kubepods-besteffort-pod7102fdb7_6fdd_433a_8adb_af740be48f42.slice. Mar 25 01:19:17.880565 kubelet[3333]: I0325 01:19:17.880521 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbsj6\" (UniqueName: \"kubernetes.io/projected/7102fdb7-6fdd-433a-8adb-af740be48f42-kube-api-access-hbsj6\") pod \"calico-kube-controllers-77c746f987-bf7s5\" (UID: \"7102fdb7-6fdd-433a-8adb-af740be48f42\") " pod="calico-system/calico-kube-controllers-77c746f987-bf7s5" Mar 25 01:19:17.880565 kubelet[3333]: I0325 01:19:17.880570 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7102fdb7-6fdd-433a-8adb-af740be48f42-tigera-ca-bundle\") pod \"calico-kube-controllers-77c746f987-bf7s5\" (UID: \"7102fdb7-6fdd-433a-8adb-af740be48f42\") " pod="calico-system/calico-kube-controllers-77c746f987-bf7s5" Mar 25 01:19:18.086955 containerd[1757]: time="2025-03-25T01:19:18.086846096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77c746f987-bf7s5,Uid:7102fdb7-6fdd-433a-8adb-af740be48f42,Namespace:calico-system,Attempt:0,}" Mar 25 01:19:18.234653 systemd-networkd[1452]: cali9727ce47feb: Link UP Mar 25 01:19:18.235316 systemd-networkd[1452]: cali9727ce47feb: Gained carrier Mar 25 01:19:18.258822 containerd[1757]: 2025-03-25 01:19:18.162 [INFO][5878] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--77c746f987--bf7s5-eth0 calico-kube-controllers-77c746f987- calico-system 7102fdb7-6fdd-433a-8adb-af740be48f42 1031 0 2025-03-25 01:19:15 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:77c746f987 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4284.0.0-a-f1ebfb6c0b calico-kube-controllers-77c746f987-bf7s5 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9727ce47feb [] []}} ContainerID="08317527061496fa879a89ba5447bed6290d9dc04fa93b5bd44bff73fc40cbe0" Namespace="calico-system" Pod="calico-kube-controllers-77c746f987-bf7s5" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--77c746f987--bf7s5-" Mar 25 01:19:18.258822 containerd[1757]: 2025-03-25 01:19:18.162 [INFO][5878] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="08317527061496fa879a89ba5447bed6290d9dc04fa93b5bd44bff73fc40cbe0" Namespace="calico-system" Pod="calico-kube-controllers-77c746f987-bf7s5" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--77c746f987--bf7s5-eth0" Mar 25 01:19:18.258822 containerd[1757]: 2025-03-25 01:19:18.185 [INFO][5890] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="08317527061496fa879a89ba5447bed6290d9dc04fa93b5bd44bff73fc40cbe0" HandleID="k8s-pod-network.08317527061496fa879a89ba5447bed6290d9dc04fa93b5bd44bff73fc40cbe0" Workload="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--77c746f987--bf7s5-eth0" Mar 25 01:19:18.258822 containerd[1757]: 2025-03-25 01:19:18.196 [INFO][5890] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="08317527061496fa879a89ba5447bed6290d9dc04fa93b5bd44bff73fc40cbe0" HandleID="k8s-pod-network.08317527061496fa879a89ba5447bed6290d9dc04fa93b5bd44bff73fc40cbe0" Workload="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--77c746f987--bf7s5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031ba60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4284.0.0-a-f1ebfb6c0b", "pod":"calico-kube-controllers-77c746f987-bf7s5", "timestamp":"2025-03-25 01:19:18.185130621 +0000 UTC"}, Hostname:"ci-4284.0.0-a-f1ebfb6c0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 25 01:19:18.258822 containerd[1757]: 2025-03-25 01:19:18.197 [INFO][5890] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 25 01:19:18.258822 containerd[1757]: 2025-03-25 01:19:18.197 [INFO][5890] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 25 01:19:18.258822 containerd[1757]: 2025-03-25 01:19:18.197 [INFO][5890] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-a-f1ebfb6c0b' Mar 25 01:19:18.258822 containerd[1757]: 2025-03-25 01:19:18.199 [INFO][5890] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.08317527061496fa879a89ba5447bed6290d9dc04fa93b5bd44bff73fc40cbe0" host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:19:18.258822 containerd[1757]: 2025-03-25 01:19:18.203 [INFO][5890] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:19:18.258822 containerd[1757]: 2025-03-25 01:19:18.207 [INFO][5890] ipam/ipam.go 489: Trying affinity for 192.168.74.0/26 host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:19:18.258822 containerd[1757]: 2025-03-25 01:19:18.209 [INFO][5890] ipam/ipam.go 155: Attempting to load block cidr=192.168.74.0/26 host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:19:18.258822 containerd[1757]: 2025-03-25 01:19:18.212 [INFO][5890] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.74.0/26 host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:19:18.258822 containerd[1757]: 2025-03-25 01:19:18.212 [INFO][5890] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.74.0/26 handle="k8s-pod-network.08317527061496fa879a89ba5447bed6290d9dc04fa93b5bd44bff73fc40cbe0" host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:19:18.258822 containerd[1757]: 2025-03-25 01:19:18.214 [INFO][5890] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.08317527061496fa879a89ba5447bed6290d9dc04fa93b5bd44bff73fc40cbe0 Mar 25 01:19:18.258822 containerd[1757]: 2025-03-25 01:19:18.219 [INFO][5890] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.74.0/26 handle="k8s-pod-network.08317527061496fa879a89ba5447bed6290d9dc04fa93b5bd44bff73fc40cbe0" host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:19:18.258822 containerd[1757]: 2025-03-25 01:19:18.230 [INFO][5890] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.74.7/26] block=192.168.74.0/26 
handle="k8s-pod-network.08317527061496fa879a89ba5447bed6290d9dc04fa93b5bd44bff73fc40cbe0" host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:19:18.258822 containerd[1757]: 2025-03-25 01:19:18.230 [INFO][5890] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.74.7/26] handle="k8s-pod-network.08317527061496fa879a89ba5447bed6290d9dc04fa93b5bd44bff73fc40cbe0" host="ci-4284.0.0-a-f1ebfb6c0b" Mar 25 01:19:18.258822 containerd[1757]: 2025-03-25 01:19:18.230 [INFO][5890] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 25 01:19:18.258822 containerd[1757]: 2025-03-25 01:19:18.230 [INFO][5890] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.7/26] IPv6=[] ContainerID="08317527061496fa879a89ba5447bed6290d9dc04fa93b5bd44bff73fc40cbe0" HandleID="k8s-pod-network.08317527061496fa879a89ba5447bed6290d9dc04fa93b5bd44bff73fc40cbe0" Workload="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--77c746f987--bf7s5-eth0" Mar 25 01:19:18.259355 containerd[1757]: 2025-03-25 01:19:18.232 [INFO][5878] cni-plugin/k8s.go 386: Populated endpoint ContainerID="08317527061496fa879a89ba5447bed6290d9dc04fa93b5bd44bff73fc40cbe0" Namespace="calico-system" Pod="calico-kube-controllers-77c746f987-bf7s5" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--77c746f987--bf7s5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--77c746f987--bf7s5-eth0", GenerateName:"calico-kube-controllers-77c746f987-", Namespace:"calico-system", SelfLink:"", UID:"7102fdb7-6fdd-433a-8adb-af740be48f42", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 1, 19, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77c746f987", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-a-f1ebfb6c0b", ContainerID:"", Pod:"calico-kube-controllers-77c746f987-bf7s5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9727ce47feb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 01:19:18.259355 containerd[1757]: 2025-03-25 01:19:18.232 [INFO][5878] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.74.7/32] ContainerID="08317527061496fa879a89ba5447bed6290d9dc04fa93b5bd44bff73fc40cbe0" Namespace="calico-system" Pod="calico-kube-controllers-77c746f987-bf7s5" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--77c746f987--bf7s5-eth0" Mar 25 01:19:18.259355 containerd[1757]: 2025-03-25 01:19:18.232 [INFO][5878] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9727ce47feb ContainerID="08317527061496fa879a89ba5447bed6290d9dc04fa93b5bd44bff73fc40cbe0" Namespace="calico-system" Pod="calico-kube-controllers-77c746f987-bf7s5" 
WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--77c746f987--bf7s5-eth0" Mar 25 01:19:18.259355 containerd[1757]: 2025-03-25 01:19:18.235 [INFO][5878] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="08317527061496fa879a89ba5447bed6290d9dc04fa93b5bd44bff73fc40cbe0" Namespace="calico-system" Pod="calico-kube-controllers-77c746f987-bf7s5" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--77c746f987--bf7s5-eth0" Mar 25 01:19:18.259355 containerd[1757]: 2025-03-25 01:19:18.236 [INFO][5878] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="08317527061496fa879a89ba5447bed6290d9dc04fa93b5bd44bff73fc40cbe0" Namespace="calico-system" Pod="calico-kube-controllers-77c746f987-bf7s5" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--77c746f987--bf7s5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--77c746f987--bf7s5-eth0", GenerateName:"calico-kube-controllers-77c746f987-", Namespace:"calico-system", SelfLink:"", UID:"7102fdb7-6fdd-433a-8adb-af740be48f42", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 1, 19, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77c746f987", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-a-f1ebfb6c0b", ContainerID:"08317527061496fa879a89ba5447bed6290d9dc04fa93b5bd44bff73fc40cbe0", Pod:"calico-kube-controllers-77c746f987-bf7s5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9727ce47feb", MAC:"5a:ff:dd:52:09:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 01:19:18.259355 containerd[1757]: 2025-03-25 01:19:18.254 [INFO][5878] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="08317527061496fa879a89ba5447bed6290d9dc04fa93b5bd44bff73fc40cbe0" Namespace="calico-system" Pod="calico-kube-controllers-77c746f987-bf7s5" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--77c746f987--bf7s5-eth0" Mar 25 01:19:18.313294 containerd[1757]: time="2025-03-25T01:19:18.313071375Z" level=info msg="connecting to shim 08317527061496fa879a89ba5447bed6290d9dc04fa93b5bd44bff73fc40cbe0" address="unix:///run/containerd/s/823024806b046ec7d68421332e3fdfdbeadc98515bacb7fd46d0cad31c825e57" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:19:18.339354 systemd[1]: Started cri-containerd-08317527061496fa879a89ba5447bed6290d9dc04fa93b5bd44bff73fc40cbe0.scope - libcontainer container 08317527061496fa879a89ba5447bed6290d9dc04fa93b5bd44bff73fc40cbe0. 
Mar 25 01:19:18.383881 containerd[1757]: time="2025-03-25T01:19:18.383812109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77c746f987-bf7s5,Uid:7102fdb7-6fdd-433a-8adb-af740be48f42,Namespace:calico-system,Attempt:0,} returns sandbox id \"08317527061496fa879a89ba5447bed6290d9dc04fa93b5bd44bff73fc40cbe0\"" Mar 25 01:19:18.398217 containerd[1757]: time="2025-03-25T01:19:18.397808664Z" level=info msg="CreateContainer within sandbox \"08317527061496fa879a89ba5447bed6290d9dc04fa93b5bd44bff73fc40cbe0\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 25 01:19:18.425665 containerd[1757]: time="2025-03-25T01:19:18.425593694Z" level=info msg="Container 9416cb77874cc4d7d473bc62b3e68224094c040cadd2e18323464f14189dc508: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:19:18.447755 containerd[1757]: time="2025-03-25T01:19:18.447043607Z" level=info msg="CreateContainer within sandbox \"08317527061496fa879a89ba5447bed6290d9dc04fa93b5bd44bff73fc40cbe0\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9416cb77874cc4d7d473bc62b3e68224094c040cadd2e18323464f14189dc508\"" Mar 25 01:19:18.454685 containerd[1757]: time="2025-03-25T01:19:18.453636644Z" level=info msg="StartContainer for \"9416cb77874cc4d7d473bc62b3e68224094c040cadd2e18323464f14189dc508\"" Mar 25 01:19:18.457502 containerd[1757]: time="2025-03-25T01:19:18.457461163Z" level=info msg="connecting to shim 9416cb77874cc4d7d473bc62b3e68224094c040cadd2e18323464f14189dc508" address="unix:///run/containerd/s/823024806b046ec7d68421332e3fdfdbeadc98515bacb7fd46d0cad31c825e57" protocol=ttrpc version=3 Mar 25 01:19:18.485757 systemd[1]: Started cri-containerd-9416cb77874cc4d7d473bc62b3e68224094c040cadd2e18323464f14189dc508.scope - libcontainer container 9416cb77874cc4d7d473bc62b3e68224094c040cadd2e18323464f14189dc508. 
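The "m=+74.475855433" suffixes in the pod-startup entries just below are Go's monotonic clock reading, printed by time.Time.String(); durations such as podStartSLOduration are computed from the monotonic parts, so wall-clock adjustments cannot skew them. A self-contained demonstration:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        created := time.Now()             // carries a monotonic reading
        time.Sleep(50 * time.Millisecond) // stand-in for pod start-up work
        running := time.Now()

        fmt.Println(running)              // prints "... m=+0.05..." suffix
        fmt.Println(running.Sub(created)) // monotonic difference, ~50ms
    }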
Mar 25 01:19:18.493591 kubelet[3333]: I0325 01:19:18.493536 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7968c68487-4lhhh" podStartSLOduration=5.49351235 podStartE2EDuration="5.49351235s" podCreationTimestamp="2025-03-25 01:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:19:18.467525119 +0000 UTC m=+74.475855433" watchObservedRunningTime="2025-03-25 01:19:18.49351235 +0000 UTC m=+74.501842704" Mar 25 01:19:18.517719 kubelet[3333]: I0325 01:19:18.517102 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lpv54" podStartSLOduration=4.517081742 podStartE2EDuration="4.517081742s" podCreationTimestamp="2025-03-25 01:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:19:18.495297349 +0000 UTC m=+74.503627703" watchObservedRunningTime="2025-03-25 01:19:18.517081742 +0000 UTC m=+74.525412096" Mar 25 01:19:18.603191 containerd[1757]: time="2025-03-25T01:19:18.602900311Z" level=info msg="TaskExit event in podsandbox handler container_id:\"23af9a8a118b49e4668aa68d3dfcb5d9f624c9dfc38075b333877d5196869496\" id:\"b3b155ec5480ad6d70507a6629eaf38d15af5cf645a3dc5c1715875e905744ea\" pid:5980 exit_status:1 exited_at:{seconds:1742865558 nanos:602607631}" Mar 25 01:19:18.623634 containerd[1757]: time="2025-03-25T01:19:18.623575024Z" level=info msg="StartContainer for \"9416cb77874cc4d7d473bc62b3e68224094c040cadd2e18323464f14189dc508\" returns successfully" Mar 25 01:19:19.597809 containerd[1757]: time="2025-03-25T01:19:19.597744355Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9416cb77874cc4d7d473bc62b3e68224094c040cadd2e18323464f14189dc508\" id:\"074b13b819e47b2c3fda1d6c5c9098f44c6f53d87565b8e4d269b94123925975\" pid:6175 exit_status:1 exited_at:{seconds:1742865559 nanos:595448676}" Mar 25 01:19:19.652955 containerd[1757]: time="2025-03-25T01:19:19.652836735Z" level=info msg="TaskExit event in podsandbox handler container_id:\"23af9a8a118b49e4668aa68d3dfcb5d9f624c9dfc38075b333877d5196869496\" id:\"ce794e0b51b80edb87d59e6b8bcaacae662692dfb9209e6381985763a0258fd5\" pid:6172 exit_status:1 exited_at:{seconds:1742865559 nanos:651935215}" Mar 25 01:19:19.712779 systemd-networkd[1452]: cali9727ce47feb: Gained IPv6LL Mar 25 01:19:20.517569 containerd[1757]: time="2025-03-25T01:19:20.517399465Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9416cb77874cc4d7d473bc62b3e68224094c040cadd2e18323464f14189dc508\" id:\"6980a0c77954cb0ae84d4cbb25977e5f7a88a1b1c1e6b1fce560e39111564dd7\" pid:6274 exit_status:1 exited_at:{seconds:1742865560 nanos:517095425}" Mar 25 01:19:20.550483 containerd[1757]: time="2025-03-25T01:19:20.550438493Z" level=info msg="TaskExit event in podsandbox handler container_id:\"23af9a8a118b49e4668aa68d3dfcb5d9f624c9dfc38075b333877d5196869496\" id:\"1666036663213b4f12fa008eddb67d58a8539d53bf249ac71b7c480b2a9e723b\" pid:6271 exit_status:1 exited_at:{seconds:1742865560 nanos:550067213}" Mar 25 01:19:21.519519 containerd[1757]: time="2025-03-25T01:19:21.519299666Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9416cb77874cc4d7d473bc62b3e68224094c040cadd2e18323464f14189dc508\" id:\"c33c1f92894c83b470b1b9ac9215aa137800986263214ea0122195ab6f04bf76\" pid:6308 exited_at:{seconds:1742865561 nanos:518672586}"
Mar 25 01:19:21.537002 kubelet[3333]: I0325 01:19:21.536303 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-77c746f987-bf7s5" podStartSLOduration=6.53628518 podStartE2EDuration="6.53628518s" podCreationTimestamp="2025-03-25 01:19:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:19:19.490501593 +0000 UTC m=+75.498831947" watchObservedRunningTime="2025-03-25 01:19:21.53628518 +0000 UTC m=+77.544615534" Mar 25 01:19:42.445129 systemd[1]: Started sshd@7-10.200.20.40:22-10.200.16.10:57430.service - OpenSSH per-connection server daemon (10.200.16.10:57430). Mar 25 01:19:42.948494 sshd[6350]: Accepted publickey for core from 10.200.16.10 port 57430 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:19:42.952182 sshd-session[6350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:19:42.960294 systemd-logind[1738]: New session 10 of user core. Mar 25 01:19:42.966252 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 25 01:19:43.407882 sshd[6352]: Connection closed by 10.200.16.10 port 57430 Mar 25 01:19:43.408328 sshd-session[6350]: pam_unix(sshd:session): session closed for user core Mar 25 01:19:43.412228 systemd[1]: sshd@7-10.200.20.40:22-10.200.16.10:57430.service: Deactivated successfully. Mar 25 01:19:43.414370 systemd[1]: session-10.scope: Deactivated successfully. Mar 25 01:19:43.416071 systemd-logind[1738]: Session 10 logged out. Waiting for processes to exit. Mar 25 01:19:43.417447 systemd-logind[1738]: Removed session 10. Mar 25 01:19:48.494254 systemd[1]: Started sshd@8-10.200.20.40:22-10.200.16.10:57438.service - OpenSSH per-connection server daemon (10.200.16.10:57438). Mar 25 01:19:48.986579 sshd[6364]: Accepted publickey for core from 10.200.16.10 port 57438 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:19:48.988263 sshd-session[6364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:19:48.993338 systemd-logind[1738]: New session 11 of user core. Mar 25 01:19:49.001828 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 25 01:19:49.429648 sshd[6366]: Connection closed by 10.200.16.10 port 57438 Mar 25 01:19:49.431630 sshd-session[6364]: pam_unix(sshd:session): session closed for user core Mar 25 01:19:49.437127 systemd-logind[1738]: Session 11 logged out. Waiting for processes to exit. Mar 25 01:19:49.437302 systemd[1]: sshd@8-10.200.20.40:22-10.200.16.10:57438.service: Deactivated successfully. Mar 25 01:19:49.439242 systemd[1]: session-11.scope: Deactivated successfully. Mar 25 01:19:49.441997 systemd-logind[1738]: Removed session 11.
Mar 25 01:19:50.532611 containerd[1757]: time="2025-03-25T01:19:50.532553455Z" level=info msg="TaskExit event in podsandbox handler container_id:\"23af9a8a118b49e4668aa68d3dfcb5d9f624c9dfc38075b333877d5196869496\" id:\"1651b5a1a27d3b7a670c2396cbf5ffc157e8af5e9f4107031002df8c005b9277\" pid:6389 exited_at:{seconds:1742865590 nanos:531976895}" Mar 25 01:19:51.509397 containerd[1757]: time="2025-03-25T01:19:51.509333087Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9416cb77874cc4d7d473bc62b3e68224094c040cadd2e18323464f14189dc508\" id:\"a624483e9447bfd4d0725f6b8e310f493f1c9112f2eeb90be2081ad0ad7add70\" pid:6414 exited_at:{seconds:1742865591 nanos:508895967}" Mar 25 01:19:54.516037 systemd[1]: Started sshd@9-10.200.20.40:22-10.200.16.10:43324.service - OpenSSH per-connection server daemon (10.200.16.10:43324). Mar 25 01:19:54.977088 sshd[6426]: Accepted publickey for core from 10.200.16.10 port 43324 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:19:54.978387 sshd-session[6426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:19:54.982764 systemd-logind[1738]: New session 12 of user core. Mar 25 01:19:54.990753 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 25 01:19:55.409244 sshd[6428]: Connection closed by 10.200.16.10 port 43324 Mar 25 01:19:55.409736 sshd-session[6426]: pam_unix(sshd:session): session closed for user core Mar 25 01:19:55.418074 systemd[1]: sshd@9-10.200.20.40:22-10.200.16.10:43324.service: Deactivated successfully. Mar 25 01:19:55.422102 systemd[1]: session-12.scope: Deactivated successfully. Mar 25 01:19:55.423904 systemd-logind[1738]: Session 12 logged out. Waiting for processes to exit. Mar 25 01:19:55.424848 systemd-logind[1738]: Removed session 12. Mar 25 01:19:55.499981 systemd[1]: Started sshd@10-10.200.20.40:22-10.200.16.10:43328.service - OpenSSH per-connection server daemon (10.200.16.10:43328). Mar 25 01:19:56.002100 sshd[6441]: Accepted publickey for core from 10.200.16.10 port 43328 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:19:56.003824 sshd-session[6441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:19:56.009280 systemd-logind[1738]: New session 13 of user core. Mar 25 01:19:56.018789 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 25 01:19:56.452687 sshd[6443]: Connection closed by 10.200.16.10 port 43328 Mar 25 01:19:56.453034 sshd-session[6441]: pam_unix(sshd:session): session closed for user core Mar 25 01:19:56.456736 systemd[1]: sshd@10-10.200.20.40:22-10.200.16.10:43328.service: Deactivated successfully. Mar 25 01:19:56.458459 systemd[1]: session-13.scope: Deactivated successfully. Mar 25 01:19:56.459133 systemd-logind[1738]: Session 13 logged out. Waiting for processes to exit. Mar 25 01:19:56.459938 systemd-logind[1738]: Removed session 13. Mar 25 01:19:56.534892 systemd[1]: Started sshd@11-10.200.20.40:22-10.200.16.10:43334.service - OpenSSH per-connection server daemon (10.200.16.10:43334). Mar 25 01:19:56.998987 sshd[6453]: Accepted publickey for core from 10.200.16.10 port 43334 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:19:56.999544 sshd-session[6453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:19:57.006078 systemd-logind[1738]: New session 14 of user core. Mar 25 01:19:57.013789 systemd[1]: Started session-14.scope - Session 14 of User core. 
Mar 25 01:19:57.381346 sshd[6455]: Connection closed by 10.200.16.10 port 43334 Mar 25 01:19:57.381127 sshd-session[6453]: pam_unix(sshd:session): session closed for user core Mar 25 01:19:57.384779 systemd[1]: sshd@11-10.200.20.40:22-10.200.16.10:43334.service: Deactivated successfully. Mar 25 01:19:57.386545 systemd[1]: session-14.scope: Deactivated successfully. Mar 25 01:19:57.389222 systemd-logind[1738]: Session 14 logged out. Waiting for processes to exit. Mar 25 01:19:57.391126 systemd-logind[1738]: Removed session 14. Mar 25 01:20:02.464817 systemd[1]: Started sshd@12-10.200.20.40:22-10.200.16.10:35956.service - OpenSSH per-connection server daemon (10.200.16.10:35956). Mar 25 01:20:02.920881 sshd[6478]: Accepted publickey for core from 10.200.16.10 port 35956 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:20:02.923054 sshd-session[6478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:20:02.927328 systemd-logind[1738]: New session 15 of user core. Mar 25 01:20:02.935815 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 25 01:20:03.330646 sshd[6480]: Connection closed by 10.200.16.10 port 35956 Mar 25 01:20:03.331238 sshd-session[6478]: pam_unix(sshd:session): session closed for user core Mar 25 01:20:03.334748 systemd-logind[1738]: Session 15 logged out. Waiting for processes to exit. Mar 25 01:20:03.335797 systemd[1]: sshd@12-10.200.20.40:22-10.200.16.10:35956.service: Deactivated successfully. Mar 25 01:20:03.340088 systemd[1]: session-15.scope: Deactivated successfully. Mar 25 01:20:03.341473 systemd-logind[1738]: Removed session 15. Mar 25 01:20:04.147137 containerd[1757]: time="2025-03-25T01:20:04.147073204Z" level=info msg="StopPodSandbox for \"e17e6d437681e299ce52aa70256d0c0993f42e8ab1220bb29b840e1fc7da10fc\"" Mar 25 01:20:04.147711 containerd[1757]: time="2025-03-25T01:20:04.147210804Z" level=info msg="TearDown network for sandbox \"e17e6d437681e299ce52aa70256d0c0993f42e8ab1220bb29b840e1fc7da10fc\" successfully" Mar 25 01:20:04.147711 containerd[1757]: time="2025-03-25T01:20:04.147221084Z" level=info msg="StopPodSandbox for \"e17e6d437681e299ce52aa70256d0c0993f42e8ab1220bb29b840e1fc7da10fc\" returns successfully" Mar 25 01:20:04.148559 containerd[1757]: time="2025-03-25T01:20:04.147949403Z" level=info msg="RemovePodSandbox for \"e17e6d437681e299ce52aa70256d0c0993f42e8ab1220bb29b840e1fc7da10fc\"" Mar 25 01:20:04.148559 containerd[1757]: time="2025-03-25T01:20:04.147976643Z" level=info msg="Forcibly stopping sandbox \"e17e6d437681e299ce52aa70256d0c0993f42e8ab1220bb29b840e1fc7da10fc\"" Mar 25 01:20:04.148559 containerd[1757]: time="2025-03-25T01:20:04.148049163Z" level=info msg="TearDown network for sandbox \"e17e6d437681e299ce52aa70256d0c0993f42e8ab1220bb29b840e1fc7da10fc\" successfully" Mar 25 01:20:04.155648 containerd[1757]: time="2025-03-25T01:20:04.155612441Z" level=info msg="Ensure that sandbox e17e6d437681e299ce52aa70256d0c0993f42e8ab1220bb29b840e1fc7da10fc in task-service has been cleanup successfully" Mar 25 01:20:04.168171 containerd[1757]: time="2025-03-25T01:20:04.168126756Z" level=info msg="RemovePodSandbox \"e17e6d437681e299ce52aa70256d0c0993f42e8ab1220bb29b840e1fc7da10fc\" returns successfully" Mar 25 01:20:04.168809 containerd[1757]: time="2025-03-25T01:20:04.168782076Z" level=info msg="StopPodSandbox for \"7b08991c6a14f6e7a6f913c195bf1325a677ddacda527262ece03c581530bcbb\""
Mar 25 01:20:04.168916 containerd[1757]: time="2025-03-25T01:20:04.168892396Z" level=info msg="TearDown network for sandbox \"7b08991c6a14f6e7a6f913c195bf1325a677ddacda527262ece03c581530bcbb\" successfully" Mar 25 01:20:04.168916 containerd[1757]: time="2025-03-25T01:20:04.168910996Z" level=info msg="StopPodSandbox for \"7b08991c6a14f6e7a6f913c195bf1325a677ddacda527262ece03c581530bcbb\" returns successfully" Mar 25 01:20:04.170559 containerd[1757]: time="2025-03-25T01:20:04.169172316Z" level=info msg="RemovePodSandbox for \"7b08991c6a14f6e7a6f913c195bf1325a677ddacda527262ece03c581530bcbb\"" Mar 25 01:20:04.170559 containerd[1757]: time="2025-03-25T01:20:04.169196236Z" level=info msg="Forcibly stopping sandbox \"7b08991c6a14f6e7a6f913c195bf1325a677ddacda527262ece03c581530bcbb\"" Mar 25 01:20:04.170559 containerd[1757]: time="2025-03-25T01:20:04.169263396Z" level=info msg="TearDown network for sandbox \"7b08991c6a14f6e7a6f913c195bf1325a677ddacda527262ece03c581530bcbb\" successfully" Mar 25 01:20:04.170825 containerd[1757]: time="2025-03-25T01:20:04.170803275Z" level=info msg="Ensure that sandbox 7b08991c6a14f6e7a6f913c195bf1325a677ddacda527262ece03c581530bcbb in task-service has been cleanup successfully" Mar 25 01:20:04.179511 containerd[1757]: time="2025-03-25T01:20:04.179468392Z" level=info msg="RemovePodSandbox \"7b08991c6a14f6e7a6f913c195bf1325a677ddacda527262ece03c581530bcbb\" returns successfully" Mar 25 01:20:04.179938 containerd[1757]: time="2025-03-25T01:20:04.179911392Z" level=info msg="StopPodSandbox for \"230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2\"" Mar 25 01:20:04.262532 containerd[1757]: 2025-03-25 01:20:04.225 [WARNING][6505] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--5f856dbd65--p9hgt-eth0" Mar 25 01:20:04.262532 containerd[1757]: 2025-03-25 01:20:04.225 [INFO][6505] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" Mar 25 01:20:04.262532 containerd[1757]: 2025-03-25 01:20:04.225 [INFO][6505] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" iface="eth0" netns="" Mar 25 01:20:04.262532 containerd[1757]: 2025-03-25 01:20:04.225 [INFO][6505] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" Mar 25 01:20:04.262532 containerd[1757]: 2025-03-25 01:20:04.225 [INFO][6505] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" Mar 25 01:20:04.262532 containerd[1757]: 2025-03-25 01:20:04.249 [INFO][6513] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" HandleID="k8s-pod-network.230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" Workload="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--5f856dbd65--p9hgt-eth0" Mar 25 01:20:04.262532 containerd[1757]: 2025-03-25 01:20:04.249 [INFO][6513] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 25 01:20:04.262532 containerd[1757]: 2025-03-25 01:20:04.249 [INFO][6513] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Mar 25 01:20:04.262532 containerd[1757]: 2025-03-25 01:20:04.258 [WARNING][6513] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" HandleID="k8s-pod-network.230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" Workload="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--5f856dbd65--p9hgt-eth0" Mar 25 01:20:04.262532 containerd[1757]: 2025-03-25 01:20:04.258 [INFO][6513] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" HandleID="k8s-pod-network.230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" Workload="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--5f856dbd65--p9hgt-eth0" Mar 25 01:20:04.262532 containerd[1757]: 2025-03-25 01:20:04.259 [INFO][6513] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 25 01:20:04.262532 containerd[1757]: 2025-03-25 01:20:04.260 [INFO][6505] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" Mar 25 01:20:04.262532 containerd[1757]: time="2025-03-25T01:20:04.262402004Z" level=info msg="TearDown network for sandbox \"230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2\" successfully" Mar 25 01:20:04.262532 containerd[1757]: time="2025-03-25T01:20:04.262424004Z" level=info msg="StopPodSandbox for \"230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2\" returns successfully" Mar 25 01:20:04.263660 containerd[1757]: time="2025-03-25T01:20:04.263294324Z" level=info msg="RemovePodSandbox for \"230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2\"" Mar 25 01:20:04.263660 containerd[1757]: time="2025-03-25T01:20:04.263325564Z" level=info msg="Forcibly stopping sandbox \"230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2\"" Mar 25 01:20:04.329626 containerd[1757]: 2025-03-25 01:20:04.295 [WARNING][6531] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" WorkloadEndpoint="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--5f856dbd65--p9hgt-eth0" Mar 25 01:20:04.329626 containerd[1757]: 2025-03-25 01:20:04.295 [INFO][6531] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2"
Mar 25 01:20:04.329626 containerd[1757]: 2025-03-25 01:20:04.295 [INFO][6531] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" iface="eth0" netns="" Mar 25 01:20:04.329626 containerd[1757]: 2025-03-25 01:20:04.295 [INFO][6531] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" Mar 25 01:20:04.329626 containerd[1757]: 2025-03-25 01:20:04.295 [INFO][6531] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" Mar 25 01:20:04.329626 containerd[1757]: 2025-03-25 01:20:04.314 [INFO][6538] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" HandleID="k8s-pod-network.230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" Workload="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--5f856dbd65--p9hgt-eth0" Mar 25 01:20:04.329626 containerd[1757]: 2025-03-25 01:20:04.315 [INFO][6538] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 25 01:20:04.329626 containerd[1757]: 2025-03-25 01:20:04.315 [INFO][6538] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 25 01:20:04.329626 containerd[1757]: 2025-03-25 01:20:04.324 [WARNING][6538] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" HandleID="k8s-pod-network.230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" Workload="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--5f856dbd65--p9hgt-eth0" Mar 25 01:20:04.329626 containerd[1757]: 2025-03-25 01:20:04.324 [INFO][6538] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" HandleID="k8s-pod-network.230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" Workload="ci--4284.0.0--a--f1ebfb6c0b-k8s-calico--kube--controllers--5f856dbd65--p9hgt-eth0" Mar 25 01:20:04.329626 containerd[1757]: 2025-03-25 01:20:04.326 [INFO][6538] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 25 01:20:04.329626 containerd[1757]: 2025-03-25 01:20:04.327 [INFO][6531] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2" Mar 25 01:20:04.331049 containerd[1757]: time="2025-03-25T01:20:04.330070340Z" level=info msg="TearDown network for sandbox \"230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2\" successfully" Mar 25 01:20:04.331815 containerd[1757]: time="2025-03-25T01:20:04.331788060Z" level=info msg="Ensure that sandbox 230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2 in task-service has been cleanup successfully" Mar 25 01:20:04.343987 containerd[1757]: time="2025-03-25T01:20:04.343942576Z" level=info msg="RemovePodSandbox \"230d9e58b48a3e8ad15a707c253c511b0dd36dbcd3483b5632c6ae12c396afe2\" returns successfully" Mar 25 01:20:08.415152 systemd[1]: Started sshd@13-10.200.20.40:22-10.200.16.10:35970.service - OpenSSH per-connection server daemon (10.200.16.10:35970). Mar 25 01:20:08.885050 sshd[6545]: Accepted publickey for core from 10.200.16.10 port 35970 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:20:08.885489 sshd-session[6545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:20:08.889620 systemd-logind[1738]: New session 16 of user core.
Mar 25 01:20:08.897780 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 25 01:20:09.284765 sshd[6547]: Connection closed by 10.200.16.10 port 35970 Mar 25 01:20:09.284058 sshd-session[6545]: pam_unix(sshd:session): session closed for user core Mar 25 01:20:09.287572 systemd[1]: sshd@13-10.200.20.40:22-10.200.16.10:35970.service: Deactivated successfully. Mar 25 01:20:09.289587 systemd[1]: session-16.scope: Deactivated successfully. Mar 25 01:20:09.291173 systemd-logind[1738]: Session 16 logged out. Waiting for processes to exit. Mar 25 01:20:09.292055 systemd-logind[1738]: Removed session 16. Mar 25 01:20:14.381419 systemd[1]: Started sshd@14-10.200.20.40:22-10.200.16.10:34590.service - OpenSSH per-connection server daemon (10.200.16.10:34590). Mar 25 01:20:14.881621 sshd[6560]: Accepted publickey for core from 10.200.16.10 port 34590 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:20:14.883376 sshd-session[6560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:20:14.888910 systemd-logind[1738]: New session 17 of user core. Mar 25 01:20:14.892799 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 25 01:20:15.367632 sshd[6562]: Connection closed by 10.200.16.10 port 34590 Mar 25 01:20:15.367961 sshd-session[6560]: pam_unix(sshd:session): session closed for user core Mar 25 01:20:15.371356 systemd[1]: sshd@14-10.200.20.40:22-10.200.16.10:34590.service: Deactivated successfully. Mar 25 01:20:15.373066 systemd[1]: session-17.scope: Deactivated successfully. Mar 25 01:20:15.374738 systemd-logind[1738]: Session 17 logged out. Waiting for processes to exit. Mar 25 01:20:15.376222 systemd-logind[1738]: Removed session 17. Mar 25 01:20:18.121194 containerd[1757]: time="2025-03-25T01:20:18.121149643Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9416cb77874cc4d7d473bc62b3e68224094c040cadd2e18323464f14189dc508\" id:\"1164d438e5c601fa8a9acdebd64c178320ddc37d73a611d2c7d0faf389c4f4d3\" pid:6586 exited_at:{seconds:1742865618 nanos:120843403}" Mar 25 01:20:20.449129 systemd[1]: Started sshd@15-10.200.20.40:22-10.200.16.10:48386.service - OpenSSH per-connection server daemon (10.200.16.10:48386). Mar 25 01:20:20.531964 containerd[1757]: time="2025-03-25T01:20:20.531916187Z" level=info msg="TaskExit event in podsandbox handler container_id:\"23af9a8a118b49e4668aa68d3dfcb5d9f624c9dfc38075b333877d5196869496\" id:\"0b66d689f1ae3ada68a013bd64fac1b76055eee72c1d91922ce9095129844f93\" pid:6611 exited_at:{seconds:1742865620 nanos:531561107}" Mar 25 01:20:20.906577 sshd[6596]: Accepted publickey for core from 10.200.16.10 port 48386 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:20:20.910805 sshd-session[6596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:20:20.917231 systemd-logind[1738]: New session 18 of user core. Mar 25 01:20:20.923097 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 25 01:20:21.336590 sshd[6623]: Connection closed by 10.200.16.10 port 48386 Mar 25 01:20:21.341045 sshd-session[6596]: pam_unix(sshd:session): session closed for user core Mar 25 01:20:21.346359 systemd-logind[1738]: Session 18 logged out. Waiting for processes to exit. Mar 25 01:20:21.346926 systemd[1]: sshd@15-10.200.20.40:22-10.200.16.10:48386.service: Deactivated successfully. Mar 25 01:20:21.349296 systemd[1]: session-18.scope: Deactivated successfully. Mar 25 01:20:21.351713 systemd-logind[1738]: Removed session 18. 
Mar 25 01:20:21.424225 systemd[1]: Started sshd@16-10.200.20.40:22-10.200.16.10:48394.service - OpenSSH per-connection server daemon (10.200.16.10:48394). Mar 25 01:20:21.505866 containerd[1757]: time="2025-03-25T01:20:21.505816881Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9416cb77874cc4d7d473bc62b3e68224094c040cadd2e18323464f14189dc508\" id:\"5b1818e776c5d96b010fdf964e68b44af3bbfac5fd0a9c05d5ab93e9aa56da05\" pid:6648 exited_at:{seconds:1742865621 nanos:505373681}" Mar 25 01:20:21.914911 sshd[6635]: Accepted publickey for core from 10.200.16.10 port 48394 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:20:21.916291 sshd-session[6635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:20:21.920947 systemd-logind[1738]: New session 19 of user core. Mar 25 01:20:21.929824 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 25 01:20:22.444018 sshd[6658]: Connection closed by 10.200.16.10 port 48394 Mar 25 01:20:22.444326 sshd-session[6635]: pam_unix(sshd:session): session closed for user core Mar 25 01:20:22.447573 systemd[1]: sshd@16-10.200.20.40:22-10.200.16.10:48394.service: Deactivated successfully. Mar 25 01:20:22.449423 systemd[1]: session-19.scope: Deactivated successfully. Mar 25 01:20:22.451857 systemd-logind[1738]: Session 19 logged out. Waiting for processes to exit. Mar 25 01:20:22.453443 systemd-logind[1738]: Removed session 19. Mar 25 01:20:22.533213 systemd[1]: Started sshd@17-10.200.20.40:22-10.200.16.10:48400.service - OpenSSH per-connection server daemon (10.200.16.10:48400). Mar 25 01:20:23.024881 sshd[6667]: Accepted publickey for core from 10.200.16.10 port 48400 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:20:23.026387 sshd-session[6667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:20:23.031146 systemd-logind[1738]: New session 20 of user core. Mar 25 01:20:23.037778 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 25 01:20:24.236171 sshd[6669]: Connection closed by 10.200.16.10 port 48400 Mar 25 01:20:24.239890 sshd-session[6667]: pam_unix(sshd:session): session closed for user core Mar 25 01:20:24.243879 systemd[1]: sshd@17-10.200.20.40:22-10.200.16.10:48400.service: Deactivated successfully. Mar 25 01:20:24.245640 systemd[1]: session-20.scope: Deactivated successfully. Mar 25 01:20:24.246373 systemd-logind[1738]: Session 20 logged out. Waiting for processes to exit. Mar 25 01:20:24.247412 systemd-logind[1738]: Removed session 20. Mar 25 01:20:24.325857 systemd[1]: Started sshd@18-10.200.20.40:22-10.200.16.10:48406.service - OpenSSH per-connection server daemon (10.200.16.10:48406). Mar 25 01:20:24.820753 sshd[6687]: Accepted publickey for core from 10.200.16.10 port 48406 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:20:24.822782 sshd-session[6687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:20:24.828438 systemd-logind[1738]: New session 21 of user core. Mar 25 01:20:24.834789 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 25 01:20:25.355590 sshd[6689]: Connection closed by 10.200.16.10 port 48406 Mar 25 01:20:25.355970 sshd-session[6687]: pam_unix(sshd:session): session closed for user core Mar 25 01:20:25.360257 systemd[1]: sshd@18-10.200.20.40:22-10.200.16.10:48406.service: Deactivated successfully. Mar 25 01:20:25.363270 systemd[1]: session-21.scope: Deactivated successfully. 
Mar 25 01:20:25.364344 systemd-logind[1738]: Session 21 logged out. Waiting for processes to exit. Mar 25 01:20:25.365229 systemd-logind[1738]: Removed session 21. Mar 25 01:20:25.443676 systemd[1]: Started sshd@19-10.200.20.40:22-10.200.16.10:48416.service - OpenSSH per-connection server daemon (10.200.16.10:48416). Mar 25 01:20:25.935557 sshd[6699]: Accepted publickey for core from 10.200.16.10 port 48416 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:20:25.937140 sshd-session[6699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:20:25.944423 systemd-logind[1738]: New session 22 of user core. Mar 25 01:20:25.953802 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 25 01:20:26.354888 sshd[6701]: Connection closed by 10.200.16.10 port 48416 Mar 25 01:20:26.354786 sshd-session[6699]: pam_unix(sshd:session): session closed for user core Mar 25 01:20:26.358840 systemd[1]: sshd@19-10.200.20.40:22-10.200.16.10:48416.service: Deactivated successfully. Mar 25 01:20:26.361555 systemd[1]: session-22.scope: Deactivated successfully. Mar 25 01:20:26.364033 systemd-logind[1738]: Session 22 logged out. Waiting for processes to exit. Mar 25 01:20:26.365332 systemd-logind[1738]: Removed session 22. Mar 25 01:20:31.445245 systemd[1]: Started sshd@20-10.200.20.40:22-10.200.16.10:48764.service - OpenSSH per-connection server daemon (10.200.16.10:48764). Mar 25 01:20:31.947439 sshd[6716]: Accepted publickey for core from 10.200.16.10 port 48764 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:20:31.947918 sshd-session[6716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:20:31.951950 systemd-logind[1738]: New session 23 of user core. Mar 25 01:20:31.959754 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 25 01:20:32.375149 sshd[6718]: Connection closed by 10.200.16.10 port 48764 Mar 25 01:20:32.375758 sshd-session[6716]: pam_unix(sshd:session): session closed for user core Mar 25 01:20:32.379253 systemd[1]: sshd@20-10.200.20.40:22-10.200.16.10:48764.service: Deactivated successfully. Mar 25 01:20:32.381485 systemd[1]: session-23.scope: Deactivated successfully. Mar 25 01:20:32.383508 systemd-logind[1738]: Session 23 logged out. Waiting for processes to exit. Mar 25 01:20:32.385163 systemd-logind[1738]: Removed session 23. Mar 25 01:20:37.459942 systemd[1]: Started sshd@21-10.200.20.40:22-10.200.16.10:48766.service - OpenSSH per-connection server daemon (10.200.16.10:48766). Mar 25 01:20:37.920945 sshd[6729]: Accepted publickey for core from 10.200.16.10 port 48766 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:20:37.922343 sshd-session[6729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:20:37.926586 systemd-logind[1738]: New session 24 of user core. Mar 25 01:20:37.932766 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 25 01:20:38.319794 sshd[6731]: Connection closed by 10.200.16.10 port 48766 Mar 25 01:20:38.320517 sshd-session[6729]: pam_unix(sshd:session): session closed for user core Mar 25 01:20:38.324559 systemd[1]: sshd@21-10.200.20.40:22-10.200.16.10:48766.service: Deactivated successfully. Mar 25 01:20:38.327578 systemd[1]: session-24.scope: Deactivated successfully. Mar 25 01:20:38.328583 systemd-logind[1738]: Session 24 logged out. Waiting for processes to exit. Mar 25 01:20:38.329474 systemd-logind[1738]: Removed session 24. 
Mar 25 01:20:43.403184 systemd[1]: Started sshd@22-10.200.20.40:22-10.200.16.10:50272.service - OpenSSH per-connection server daemon (10.200.16.10:50272). Mar 25 01:20:43.860018 sshd[6752]: Accepted publickey for core from 10.200.16.10 port 50272 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:20:43.861319 sshd-session[6752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:20:43.867054 systemd-logind[1738]: New session 25 of user core. Mar 25 01:20:43.871784 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 25 01:20:44.284768 sshd[6754]: Connection closed by 10.200.16.10 port 50272 Mar 25 01:20:44.286920 sshd-session[6752]: pam_unix(sshd:session): session closed for user core Mar 25 01:20:44.290551 systemd[1]: sshd@22-10.200.20.40:22-10.200.16.10:50272.service: Deactivated successfully. Mar 25 01:20:44.294587 systemd[1]: session-25.scope: Deactivated successfully. Mar 25 01:20:44.295860 systemd-logind[1738]: Session 25 logged out. Waiting for processes to exit. Mar 25 01:20:44.299306 systemd-logind[1738]: Removed session 25. Mar 25 01:20:49.372744 systemd[1]: Started sshd@23-10.200.20.40:22-10.200.16.10:53268.service - OpenSSH per-connection server daemon (10.200.16.10:53268). Mar 25 01:20:49.863220 sshd[6766]: Accepted publickey for core from 10.200.16.10 port 53268 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:20:49.864537 sshd-session[6766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:20:49.868621 systemd-logind[1738]: New session 26 of user core. Mar 25 01:20:49.874766 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 25 01:20:50.286768 sshd[6768]: Connection closed by 10.200.16.10 port 53268 Mar 25 01:20:50.288009 sshd-session[6766]: pam_unix(sshd:session): session closed for user core Mar 25 01:20:50.292855 systemd[1]: sshd@23-10.200.20.40:22-10.200.16.10:53268.service: Deactivated successfully. Mar 25 01:20:50.301427 systemd[1]: session-26.scope: Deactivated successfully. Mar 25 01:20:50.302537 systemd-logind[1738]: Session 26 logged out. Waiting for processes to exit. Mar 25 01:20:50.304886 systemd-logind[1738]: Removed session 26. Mar 25 01:20:50.537515 containerd[1757]: time="2025-03-25T01:20:50.537276670Z" level=info msg="TaskExit event in podsandbox handler container_id:\"23af9a8a118b49e4668aa68d3dfcb5d9f624c9dfc38075b333877d5196869496\" id:\"e0162095ef87f38c6be4b2096e783018f27085539aad0930ed129395c21aeac6\" pid:6794 exited_at:{seconds:1742865650 nanos:536758430}" Mar 25 01:20:51.511899 containerd[1757]: time="2025-03-25T01:20:51.511849161Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9416cb77874cc4d7d473bc62b3e68224094c040cadd2e18323464f14189dc508\" id:\"29187da4f1d6ff7d53920b44ac932e5e13acebf39967d6f0ce2fd5ed77bfb6f6\" pid:6820 exited_at:{seconds:1742865651 nanos:511574601}" Mar 25 01:20:55.374404 systemd[1]: Started sshd@24-10.200.20.40:22-10.200.16.10:53274.service - OpenSSH per-connection server daemon (10.200.16.10:53274). Mar 25 01:20:55.868633 sshd[6831]: Accepted publickey for core from 10.200.16.10 port 53274 ssh2: RSA SHA256:vQ2nTXxwrz0RrItxuIfyj0hHdDx3hjRZ0GYYdaWmGcM Mar 25 01:20:55.869251 sshd-session[6831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:20:55.874186 systemd-logind[1738]: New session 27 of user core. Mar 25 01:20:55.884830 systemd[1]: Started session-27.scope - Session 27 of User core. 
Mar 25 01:20:56.298629 sshd[6835]: Connection closed by 10.200.16.10 port 53274 Mar 25 01:20:56.297427 sshd-session[6831]: pam_unix(sshd:session): session closed for user core Mar 25 01:20:56.302379 systemd-logind[1738]: Session 27 logged out. Waiting for processes to exit. Mar 25 01:20:56.303473 systemd[1]: sshd@24-10.200.20.40:22-10.200.16.10:53274.service: Deactivated successfully. Mar 25 01:20:56.306125 systemd[1]: session-27.scope: Deactivated successfully. Mar 25 01:20:56.308812 systemd-logind[1738]: Removed session 27.